Test Report: Hyper-V_Windows 19312

759e2b673c985a1fcc212824ad6ad48c6b3dc495:2024-08-01:35593

Failed tests (38/195)

| Order | Failed test | Duration (s) |
|-------|-------------|--------------|
| 42 | TestAddons/parallel/Registry | 71.96 |
| 57 | TestDockerFlags | 10800.595 |
| 65 | TestErrorSpam/setup | 189 |
| 89 | TestFunctional/serial/MinikubeKubectlCmdDirectly | 33.74 |
| 90 | TestFunctional/serial/ExtraConfig | 280.44 |
| 91 | TestFunctional/serial/ComponentHealth | 120.34 |
| 94 | TestFunctional/serial/InvalidService | 4.23 |
| 96 | TestFunctional/parallel/ConfigCmd | 1.71 |
| 100 | TestFunctional/parallel/StatusCmd | 309.29 |
| 104 | TestFunctional/parallel/ServiceCmdConnect | 291.38 |
| 106 | TestFunctional/parallel/PersistentVolumeClaim | 418.75 |
| 110 | TestFunctional/parallel/MySQL | 239.25 |
| 116 | TestFunctional/parallel/NodeLabels | 181.64 |
| 121 | TestFunctional/parallel/ServiceCmd/DeployApp | 2.23 |
| 123 | TestFunctional/parallel/ServiceCmd/List | 8.58 |
| 124 | TestFunctional/parallel/ServiceCmd/JSONOutput | 7.8 |
| 126 | TestFunctional/parallel/ServiceCmd/HTTPS | 7.65 |
| 128 | TestFunctional/parallel/ServiceCmd/Format | 7.63 |
| 129 | TestFunctional/parallel/ServiceCmd/URL | 7.5 |
| 131 | TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel | 8.15 |
| 134 | TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup | 4.22 |
| 142 | TestFunctional/parallel/ImageCommands/ImageListShort | 59.95 |
| 143 | TestFunctional/parallel/ImageCommands/ImageListTable | 60.21 |
| 144 | TestFunctional/parallel/ImageCommands/ImageListJson | 59.93 |
| 145 | TestFunctional/parallel/ImageCommands/ImageListYaml | 59.99 |
| 146 | TestFunctional/parallel/ImageCommands/ImageBuild | 120.52 |
| 148 | TestFunctional/parallel/ImageCommands/ImageLoadDaemon | 97.06 |
| 149 | TestFunctional/parallel/ImageCommands/ImageReloadDaemon | 120.49 |
| 150 | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon | 120.64 |
| 151 | TestFunctional/parallel/DockerEnv/powershell | 469.3 |
| 152 | TestFunctional/parallel/ImageCommands/ImageSaveToFile | 120.55 |
| 157 | TestFunctional/parallel/ImageCommands/ImageLoadFromFile | 0.49 |
| 167 | TestMultiControlPlane/serial/PingHostFromPods | 69.22 |
| 173 | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop | 46.51 |
| 176 | TestImageBuild/serial/Setup | 227.79 |
| 224 | TestMultiNode/serial/PingHostFrom2Pods | 56.99 |
| 231 | TestMultiNode/serial/RestartKeepsNodes | 470.78 |
| 256 | TestNoKubernetes/serial/StartWithK8s | 299.95 |
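
Any entry in this table can usually be reproduced in isolation by re-running the test by name from the minikube repository root with Go's -run filter. A minimal sketch, assuming the binary under test has already been built as out/minikube-windows-amd64.exe; the flag passed after -args is an assumption about the integration suite's own options and may need adjusting for this environment:

    # -run takes a regular expression matching a test name from the table above
    go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Registry" -args "--minikube-start-args=--driver=hyperv"
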
TestAddons/parallel/Registry (71.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.3998ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-m959w" [805065b5-ad94-4ec9-979f-9981375be828] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0093494s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n8vrr" [93771d4b-d1d8-429c-8cc6-ab9493f95e4b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0109758s
addons_test.go:342: (dbg) Run:  kubectl --context addons-608900 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-608900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-608900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.2408306s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 ip: (2.6866333s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0731 21:39:50.036411    5448 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-608900 ip"
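
The non-empty stderr above is what fails the assertion: the Docker CLI on the Jenkins host has a current context named "default" whose metadata file is missing. The state of the client-side context can be inspected and reset with standard docker CLI commands; whether this silences the warning on this particular host is an assumption:

    docker context ls            # list contexts known to the CLI and mark the current one
    docker context use default   # switch back to the built-in default context
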
2024/07/31 21:39:52 [DEBUG] GET http://172.17.25.32:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable registry --alsologtostderr -v=1: (15.5450466s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-608900 -n addons-608900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-608900 -n addons-608900: (13.1957989s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 logs -n 25: (9.8682576s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| delete  | -p download-only-716400                                                                     | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| start   | -o=json --download-only                                                                     | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |                     |
	|         | -p download-only-224000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-224000                                                                     | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| start   | -o=json --download-only                                                                     | download-only-756300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC |                     |
	|         | -p download-only-756300                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                                         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-756300                                                                     | download-only-756300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-716400                                                                     | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-224000                                                                     | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-756300                                                                     | download-only-756300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-920900 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC |                     |
	|         | binary-mirror-920900                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:52768                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-920900                                                                     | binary-mirror-920900 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC |                     |
	|         | addons-608900                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC |                     |
	|         | addons-608900                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-608900 --wait=true                                                                | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-608900 addons disable                                                                | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:38 UTC | 31 Jul 24 21:38 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-608900 addons disable                                                                | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:39 UTC | 31 Jul 24 21:39 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:39 UTC | 31 Jul 24 21:40 UTC |
	|         | addons-608900                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-608900 addons                                                                        | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:39 UTC | 31 Jul 24 21:39 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-608900 ip                                                                            | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:39 UTC | 31 Jul 24 21:39 UTC |
	| addons  | addons-608900 addons disable                                                                | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:39 UTC | 31 Jul 24 21:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-608900 addons disable                                                                | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:40 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-608900 ssh cat                                                                       | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:40 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-608900 addons                                                                        | addons-608900        | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:40 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:30:35
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:30:35.396169    4664 out.go:291] Setting OutFile to fd 776 ...
	I0731 21:30:35.396851    4664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:30:35.396851    4664 out.go:304] Setting ErrFile to fd 748...
	I0731 21:30:35.396851    4664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:30:35.419696    4664 out.go:298] Setting JSON to false
	I0731 21:30:35.422590    4664 start.go:129] hostinfo: {"hostname":"minikube6","uptime":537377,"bootTime":1721924058,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:30:35.423620    4664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:30:35.431993    4664 out.go:177] * [addons-608900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:30:35.435789    4664 notify.go:220] Checking for updates...
	I0731 21:30:35.440595    4664 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:30:35.443459    4664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:30:35.446352    4664 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:30:35.449391    4664 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 21:30:35.452075    4664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:30:35.456051    4664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:30:40.681017    4664 out.go:177] * Using the hyperv driver based on user configuration
	I0731 21:30:40.684755    4664 start.go:297] selected driver: hyperv
	I0731 21:30:40.684755    4664 start.go:901] validating driver "hyperv" against <nil>
	I0731 21:30:40.684852    4664 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:30:40.733357    4664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:30:40.733976    4664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:30:40.733976    4664 cni.go:84] Creating CNI manager for ""
	I0731 21:30:40.733976    4664 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:30:40.733976    4664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:30:40.734704    4664 start.go:340] cluster config:
	{Name:addons-608900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-608900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:40.734704    4664 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:30:40.738655    4664 out.go:177] * Starting "addons-608900" primary control-plane node in "addons-608900" cluster
	I0731 21:30:40.742801    4664 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:30:40.742965    4664 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:30:40.742997    4664 cache.go:56] Caching tarball of preloaded images
	I0731 21:30:40.743340    4664 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 21:30:40.743340    4664 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 21:30:40.744009    4664 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\config.json ...
	I0731 21:30:40.744009    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\config.json: {Name:mk3a41366b2ad7a5a3538954ce7efe74e4b53d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:40.745264    4664 start.go:360] acquireMachinesLock for addons-608900: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:30:40.745264    4664 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-608900"
	I0731 21:30:40.745987    4664 start.go:93] Provisioning new machine with config: &{Name:addons-608900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:addons-608900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 21:30:40.746157    4664 start.go:125] createHost starting for "" (driver="hyperv")
	I0731 21:30:40.749905    4664 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 21:30:40.750033    4664 start.go:159] libmachine.API.Create for "addons-608900" (driver="hyperv")
	I0731 21:30:40.750033    4664 client.go:168] LocalClient.Create starting
	I0731 21:30:40.751041    4664 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 21:30:40.881673    4664 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 21:30:41.199216    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 21:30:43.105655    4664 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 21:30:43.105955    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:43.106023    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 21:30:44.750409    4664 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 21:30:44.750409    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:44.750409    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 21:30:46.137551    4664 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 21:30:46.137551    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:46.138655    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 21:30:49.633833    4664 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 21:30:49.634051    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:49.636560    4664 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:30:50.115894    4664 main.go:141] libmachine: Creating SSH key...
	I0731 21:30:50.885759    4664 main.go:141] libmachine: Creating VM...
	I0731 21:30:50.885759    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 21:30:53.546726    4664 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 21:30:53.547099    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:53.547162    4664 main.go:141] libmachine: Using switch "Default Switch"
	I0731 21:30:53.547162    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 21:30:55.203405    4664 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 21:30:55.203546    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:55.203546    4664 main.go:141] libmachine: Creating VHD
	I0731 21:30:55.203546    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 21:30:58.782632    4664 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8BD9B01E-8FFF-4985-984F-D4A2B4883CC8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 21:30:58.782632    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:30:58.782753    4664 main.go:141] libmachine: Writing magic tar header
	I0731 21:30:58.782821    4664 main.go:141] libmachine: Writing SSH key tar header
	I0731 21:30:58.792921    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 21:31:01.810540    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:01.810944    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:01.811076    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\disk.vhd' -SizeBytes 20000MB
	I0731 21:31:04.239619    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:04.239619    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:04.240372    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-608900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0731 21:31:07.735998    4664 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-608900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 21:31:07.735998    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:07.736963    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-608900 -DynamicMemoryEnabled $false
	I0731 21:31:09.858227    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:09.858227    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:09.858979    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-608900 -Count 2
	I0731 21:31:11.918996    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:11.918996    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:11.919118    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-608900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\boot2docker.iso'
	I0731 21:31:14.348279    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:14.348279    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:14.348279    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-608900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\disk.vhd'
	I0731 21:31:16.834578    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:16.834578    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:16.834578    4664 main.go:141] libmachine: Starting VM...
	I0731 21:31:16.835621    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-608900
	I0731 21:31:19.890223    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:19.891249    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:19.891249    4664 main.go:141] libmachine: Waiting for host to start...
	I0731 21:31:19.891249    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:22.112509    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:22.113032    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:22.113156    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:24.503533    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:24.503533    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:25.518675    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:27.687866    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:27.687866    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:27.687866    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:30.105611    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:30.105611    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:31.115996    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:33.233968    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:33.234930    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:33.234930    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:35.641133    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:35.641833    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:36.652805    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:38.808148    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:38.808825    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:38.808908    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:41.258016    4664 main.go:141] libmachine: [stdout =====>] : 
	I0731 21:31:41.258716    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:42.269924    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:44.413756    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:44.413857    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:44.413978    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:46.842419    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:31:46.842419    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:46.842993    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:48.888134    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:48.888134    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:48.888818    4664 machine.go:94] provisionDockerMachine start ...
	I0731 21:31:48.888973    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:50.976346    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:50.976781    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:50.976781    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:53.459367    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:31:53.459367    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:53.465177    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:31:53.476783    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:31:53.476783    4664 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:31:53.597817    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:31:53.597864    4664 buildroot.go:166] provisioning hostname "addons-608900"
	I0731 21:31:53.598009    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:31:55.649993    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:31:55.650851    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:55.650851    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:31:58.102054    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:31:58.103103    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:31:58.108461    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:31:58.108638    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:31:58.108638    4664 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-608900 && echo "addons-608900" | sudo tee /etc/hostname
	I0731 21:31:58.255950    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-608900
	
	I0731 21:31:58.255950    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:00.352127    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:00.352127    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:00.352785    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:02.809173    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:02.809173    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:02.814931    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:02.815612    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:02.815612    4664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-608900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-608900/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-608900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:32:02.951691    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:32:02.951691    4664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 21:32:02.951691    4664 buildroot.go:174] setting up certificates
	I0731 21:32:02.951691    4664 provision.go:84] configureAuth start
	I0731 21:32:02.951691    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:05.047742    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:05.047742    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:05.047742    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:07.446985    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:07.447527    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:07.447598    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:09.499953    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:09.499953    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:09.499953    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:11.893555    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:11.894318    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:11.894318    4664 provision.go:143] copyHostCerts
	I0731 21:32:11.895088    4664 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 21:32:11.896542    4664 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 21:32:11.897412    4664 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 21:32:11.899042    4664 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-608900 san=[127.0.0.1 172.17.25.32 addons-608900 localhost minikube]
	I0731 21:32:12.110432    4664 provision.go:177] copyRemoteCerts
	I0731 21:32:12.122436    4664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:32:12.122436    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:14.159272    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:14.159593    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:14.159593    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:16.525657    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:16.526345    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:16.526404    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:32:16.624942    4664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5024131s)
	I0731 21:32:16.625529    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 21:32:16.664848    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:32:16.708751    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:32:16.755539    4664 provision.go:87] duration metric: took 13.8035119s to configureAuth
	I0731 21:32:16.755629    4664 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:32:16.755732    4664 config.go:182] Loaded profile config "addons-608900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:32:16.755732    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:18.806729    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:18.806729    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:18.807805    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:21.231584    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:21.231725    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:21.235820    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:21.236601    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:21.236601    4664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 21:32:21.370661    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 21:32:21.370756    4664 buildroot.go:70] root file system type: tmpfs
	I0731 21:32:21.370879    4664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 21:32:21.370879    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:23.407196    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:23.407196    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:23.407963    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:25.810638    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:25.810854    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:25.816029    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:25.816218    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:25.816218    4664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 21:32:25.973254    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 21:32:25.973254    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:28.036379    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:28.037269    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:28.037269    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:30.459823    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:30.459875    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:30.465287    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:30.465832    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:30.465832    4664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 21:32:32.647457    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 21:32:32.647457    4664 machine.go:97] duration metric: took 43.7580836s to provisionDockerMachine
	I0731 21:32:32.647457    4664 client.go:171] duration metric: took 1m51.8960033s to LocalClient.Create
	I0731 21:32:32.647457    4664 start.go:167] duration metric: took 1m51.8960033s to libmachine.API.Create "addons-608900"
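	The exchange above writes a complete docker.service unit to the guest over SSH; the diff || { mv; daemon-reload; enable; restart; } one-liner only installs the new unit and restarts Docker when the rendered file differs from (or, as here, did not previously exist on) the disk copy. Purely as an illustration, and not part of this test run (assuming the standard minikube CLI and the profile name used above), the installed unit could be inspected with:

	    # illustrative only -- inspect the unit minikube provisioned on the guest
	    minikube -p addons-608900 ssh -- sudo systemctl cat docker
	    minikube -p addons-608900 ssh -- systemctl show docker --property=ExecStart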
	I0731 21:32:32.647457    4664 start.go:293] postStartSetup for "addons-608900" (driver="hyperv")
	I0731 21:32:32.647457    4664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:32:32.661032    4664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:32:32.661032    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:34.720400    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:34.721250    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:34.721250    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:37.104430    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:37.105190    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:37.105574    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:32:37.206052    4664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5449624s)
	I0731 21:32:37.218460    4664 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:32:37.225429    4664 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:32:37.225429    4664 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 21:32:37.226145    4664 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 21:32:37.226145    4664 start.go:296] duration metric: took 4.5786298s for postStartSetup
	I0731 21:32:37.228996    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:39.282344    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:39.282344    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:39.283118    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:41.731921    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:41.732753    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:41.732920    4664 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\config.json ...
	I0731 21:32:41.735571    4664 start.go:128] duration metric: took 2m0.9878476s to createHost
	I0731 21:32:41.735571    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:43.753056    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:43.753298    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:43.753355    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:46.235226    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:46.235581    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:46.240539    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:46.240696    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:46.240696    4664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:32:46.378186    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461566.398589399
	
	I0731 21:32:46.378186    4664 fix.go:216] guest clock: 1722461566.398589399
	I0731 21:32:46.378186    4664 fix.go:229] Guest: 2024-07-31 21:32:46.398589399 +0000 UTC Remote: 2024-07-31 21:32:41.7355713 +0000 UTC m=+126.495295301 (delta=4.663018099s)
	I0731 21:32:46.378186    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:48.435806    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:48.436801    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:48.436801    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:50.937160    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:50.937310    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:50.941882    4664 main.go:141] libmachine: Using SSH client type: native
	I0731 21:32:50.942675    4664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.25.32 22 <nil> <nil>}
	I0731 21:32:50.942675    4664 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722461566
	I0731 21:32:51.089115    4664 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 21:32:46 UTC 2024
	
	I0731 21:32:51.089189    4664 fix.go:236] clock set: Wed Jul 31 21:32:46 UTC 2024
	 (err=<nil>)
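	The delta reported above is simply the difference of the two clocks, both expressed as Unix time:

	    # guest clock minus host-side reference, values copied from the log lines above
	    1722461566.398589399 - 1722461561.735571300 = 4.663018099  # seconds

	which is why minikube then truncates the sub-second part and runs sudo date -s @1722461566 on the guest.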
	I0731 21:32:51.089189    4664 start.go:83] releasing machines lock for "addons-608900", held for 2m10.3422697s
	I0731 21:32:51.089609    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:53.135532    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:53.135532    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:53.135682    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:55.546334    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:32:55.547427    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:55.551617    4664 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 21:32:55.551617    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:55.562465    4664 ssh_runner.go:195] Run: cat /version.json
	I0731 21:32:55.562465    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:32:57.724347    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:57.724347    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:57.724347    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:32:57.724731    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:32:57.724347    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:32:57.724731    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:33:00.306937    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:33:00.306937    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:00.306937    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:33:00.333348    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:33:00.333419    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:00.333991    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:33:00.396332    4664 ssh_runner.go:235] Completed: cat /version.json: (4.8338049s)
	I0731 21:33:00.407654    4664 ssh_runner.go:195] Run: systemctl --version
	I0731 21:33:00.412757    4664 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8610782s)
	W0731 21:33:00.412866    4664 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 21:33:00.429584    4664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:33:00.439379    4664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:33:00.451794    4664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:33:00.479415    4664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:33:00.479489    4664 start.go:495] detecting cgroup driver to use...
	I0731 21:33:00.479489    4664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:33:00.526171    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 21:33:00.536427    4664 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 21:33:00.536427    4664 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
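	The registry probe above fails only because the host-side executable name is forwarded verbatim to the guest: curl.exe does not exist inside the Linux VM (hence "bash: line 1: curl.exe: command not found"), so this warning does not by itself show the registry is unreachable. As an illustration only, and assuming curl (or wget) is present in the Buildroot guest image, the equivalent in-guest check would be:

	    # illustrative only -- probe registry.k8s.io from inside the VM
	    minikube -p addons-608900 ssh -- curl -sS -m 2 https://registry.k8s.io/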
	I0731 21:33:00.560891    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 21:33:00.578340    4664 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 21:33:00.588766    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 21:33:00.617435    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:33:00.650016    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 21:33:00.682258    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:33:00.710978    4664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:33:00.740480    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 21:33:00.771674    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 21:33:00.799251    4664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 21:33:00.828370    4664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:33:00.856926    4664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:33:00.883831    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:01.074806    4664 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 21:33:01.103273    4664 start.go:495] detecting cgroup driver to use...
	I0731 21:33:01.115779    4664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 21:33:01.147981    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:33:01.178119    4664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:33:01.214805    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:33:01.245853    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 21:33:01.276436    4664 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 21:33:01.337334    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 21:33:01.358527    4664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:33:01.397980    4664 ssh_runner.go:195] Run: which cri-dockerd
	I0731 21:33:01.413850    4664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 21:33:01.430103    4664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 21:33:01.468828    4664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 21:33:01.643521    4664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 21:33:01.808242    4664 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 21:33:01.808242    4664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 21:33:01.848790    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:02.009281    4664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 21:33:04.551059    4664 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5416482s)
	I0731 21:33:04.561544    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 21:33:04.593544    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 21:33:04.627723    4664 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 21:33:04.822425    4664 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 21:33:05.009730    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:05.184900    4664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 21:33:05.220955    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 21:33:05.260756    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:05.451020    4664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 21:33:05.562016    4664 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 21:33:05.573020    4664 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 21:33:05.582033    4664 start.go:563] Will wait 60s for crictl version
	I0731 21:33:05.593016    4664 ssh_runner.go:195] Run: which crictl
	I0731 21:33:05.610519    4664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:33:05.662416    4664 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 21:33:05.671302    4664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 21:33:05.709916    4664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 21:33:05.742886    4664 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 21:33:05.742886    4664 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 21:33:05.747299    4664 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 21:33:05.747299    4664 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 21:33:05.747299    4664 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 21:33:05.747299    4664 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 21:33:05.750053    4664 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 21:33:05.750053    4664 ip.go:210] interface addr: 172.17.16.1/20
	I0731 21:33:05.761835    4664 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 21:33:05.767486    4664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:33:05.786632    4664 kubeadm.go:883] updating cluster {Name:addons-608900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:addons-608900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.25.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:33:05.786801    4664 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:33:05.794782    4664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 21:33:05.815094    4664 docker.go:685] Got preloaded images: 
	I0731 21:33:05.815220    4664 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0731 21:33:05.825695    4664 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 21:33:05.852670    4664 ssh_runner.go:195] Run: which lz4
	I0731 21:33:05.869052    4664 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:33:05.874319    4664 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:33:05.874319    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0731 21:33:07.898476    4664 docker.go:649] duration metric: took 2.0410005s to copy over tarball
	I0731 21:33:07.909141    4664 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:33:13.092190    4664 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1829831s)
	I0731 21:33:13.092190    4664 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:33:13.151747    4664 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 21:33:13.177530    4664 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0731 21:33:13.220122    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:13.395872    4664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 21:33:19.204061    4664 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8081144s)
	I0731 21:33:19.215482    4664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 21:33:19.246043    4664 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 21:33:19.246143    4664 cache_images.go:84] Images are preloaded, skipping loading
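	For scale, the 359,612,007-byte preload tarball was copied to the guest in about 2.04 s and unpacked in about 5.18 s, using the durations logged above:

	    # back-of-the-envelope throughput from the figures in the log
	    359612007 bytes / 2.0410005 s ≈ 168 MiB/s   (scp of preloaded.tar.lz4)
	    359612007 bytes / 5.1829831 s ≈  66 MiB/s   (tar --xattrs -I lz4 extraction)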
	I0731 21:33:19.246143    4664 kubeadm.go:934] updating node { 172.17.25.32 8443 v1.30.3 docker true true} ...
	I0731 21:33:19.246502    4664 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-608900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.25.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-608900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:33:19.255696    4664 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 21:33:19.322982    4664 cni.go:84] Creating CNI manager for ""
	I0731 21:33:19.322982    4664 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:33:19.322982    4664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:33:19.322982    4664 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.25.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-608900 NodeName:addons-608900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.25.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.25.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:33:19.322982    4664 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.25.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-608900"
	  kubeletExtraArgs:
	    node-ip: 172.17.25.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.25.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:33:19.334489    4664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:33:19.353526    4664 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:33:19.365363    4664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:33:19.380654    4664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 21:33:19.410498    4664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:33:19.437958    4664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
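	The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2155 bytes) before later being promoted to kubeadm.yaml. As an illustration only (assuming the standard minikube CLI), it can be read back from the guest with:

	    # illustrative only -- view the staged kubeadm config on the node
	    minikube -p addons-608900 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new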
	I0731 21:33:19.477554    4664 ssh_runner.go:195] Run: grep 172.17.25.32	control-plane.minikube.internal$ /etc/hosts
	I0731 21:33:19.483525    4664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.25.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:33:19.516472    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:19.703384    4664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:33:19.734620    4664 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900 for IP: 172.17.25.32
	I0731 21:33:19.734620    4664 certs.go:194] generating shared ca certs ...
	I0731 21:33:19.734620    4664 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:19.735166    4664 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 21:33:20.005411    4664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0731 21:33:20.005411    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.006908    4664 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0731 21:33:20.006908    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.007978    4664 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 21:33:20.312539    4664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0731 21:33:20.312539    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.313070    4664 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0731 21:33:20.314065    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.314260    4664 certs.go:256] generating profile certs ...
	I0731 21:33:20.315351    4664 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.key
	I0731 21:33:20.315351    4664 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt with IP's: []
	I0731 21:33:20.545753    4664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt ...
	I0731 21:33:20.545753    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: {Name:mk1b397a855a51dfb400a8a7363dd24ee4b8ba4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.547881    4664 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.key ...
	I0731 21:33:20.547881    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.key: {Name:mk6f3a65effe76e764f44efd17989f2c5f0aa31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.548355    4664 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key.1a176ed6
	I0731 21:33:20.549369    4664 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt.1a176ed6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.25.32]
	I0731 21:33:20.693171    4664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt.1a176ed6 ...
	I0731 21:33:20.693171    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt.1a176ed6: {Name:mk27559e7e94c6a8c065d19a6995420562378822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.695276    4664 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key.1a176ed6 ...
	I0731 21:33:20.695276    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key.1a176ed6: {Name:mke9e4f4577cbe19cc7833fab452b1829033b221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:20.695552    4664 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt.1a176ed6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt
	I0731 21:33:20.707601    4664 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key.1a176ed6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key
	I0731 21:33:20.707923    4664 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.key
	I0731 21:33:20.707923    4664 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.crt with IP's: []
	I0731 21:33:21.186109    4664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.crt ...
	I0731 21:33:21.186109    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.crt: {Name:mkbfe334e8c55ffab9bebe48cccfab4cd58d9fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:21.187781    4664 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.key ...
	I0731 21:33:21.187781    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.key: {Name:mk97b83516b53c2dda69cee80d633b8ce68df064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:21.199175    4664 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 21:33:21.200158    4664 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 21:33:21.200358    4664 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 21:33:21.200621    4664 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 21:33:21.200869    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:33:21.240801    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:33:21.276747    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:33:21.324578    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:33:21.364597    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 21:33:21.409244    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:33:21.451900    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:33:21.496614    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:33:21.537242    4664 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:33:21.578451    4664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:33:21.619658    4664 ssh_runner.go:195] Run: openssl version
	I0731 21:33:21.639934    4664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:33:21.667907    4664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:33:21.675019    4664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:33:21.685687    4664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:33:21.703237    4664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
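	The apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 172.17.25.32 and copied to /var/lib/minikube/certs/apiserver.crt. As an illustration only (assuming an openssl binary is available in the guest image), its SANs could be confirmed with:

	    # illustrative only -- dump the provisioned apiserver certificate
	    minikube -p addons-608900 ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text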
	I0731 21:33:21.732458    4664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:33:21.739143    4664 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:33:21.739438    4664 kubeadm.go:392] StartCluster: {Name:addons-608900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:addons-608900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.25.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:33:21.748064    4664 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 21:33:21.779759    4664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:33:21.809285    4664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:21.840133    4664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:21.855261    4664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:21.855361    4664 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:21.865813    4664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:21.881475    4664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:21.892677    4664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:21.925522    4664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:21.939294    4664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:21.951707    4664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:21.980462    4664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:21.995764    4664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:22.006388    4664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:22.039299    4664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:22.067341    4664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:22.077343    4664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:22.094075    4664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:22.168517    4664 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:22.168909    4664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:22.335524    4664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:22.335750    4664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:22.336008    4664 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:22.607166    4664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:22.612565    4664 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:22.612565    4664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:22.612565    4664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:23.169017    4664 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:33:23.698678    4664 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:33:23.811039    4664 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:33:24.463016    4664 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:33:24.677708    4664 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:33:24.678417    4664 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-608900 localhost] and IPs [172.17.25.32 127.0.0.1 ::1]
	I0731 21:33:24.891231    4664 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:33:24.891231    4664 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-608900 localhost] and IPs [172.17.25.32 127.0.0.1 ::1]
	I0731 21:33:24.961766    4664 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:33:25.156427    4664 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:33:25.219062    4664 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:33:25.219569    4664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:25.507427    4664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:25.837911    4664 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:26.187584    4664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:26.383042    4664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:26.554272    4664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:26.554460    4664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:26.557938    4664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:26.561127    4664 out.go:204]   - Booting up control plane ...
	I0731 21:33:26.561127    4664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:26.561127    4664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:26.563212    4664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:26.587938    4664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:26.590598    4664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:26.590598    4664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:26.777794    4664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:26.777794    4664 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:27.779045    4664 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001551961s
	I0731 21:33:27.779045    4664 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:33:34.281791    4664 kubeadm.go:310] [api-check] The API server is healthy after 6.503148118s
	I0731 21:33:34.308312    4664 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:33:34.343192    4664 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:33:34.409153    4664 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:33:34.409534    4664 kubeadm.go:310] [mark-control-plane] Marking the node addons-608900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:33:34.431173    4664 kubeadm.go:310] [bootstrap-token] Using token: 43i5zb.tb6p81ojmrtd6cec
	I0731 21:33:34.438147    4664 out.go:204]   - Configuring RBAC rules ...
	I0731 21:33:34.438147    4664 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:33:34.454135    4664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:33:34.469919    4664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:33:34.483456    4664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:33:34.492262    4664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:33:34.502307    4664 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:33:34.695119    4664 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:33:35.223727    4664 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:33:35.693230    4664 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:33:35.694452    4664 kubeadm.go:310] 
	I0731 21:33:35.695266    4664 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:33:35.695308    4664 kubeadm.go:310] 
	I0731 21:33:35.695414    4664 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:33:35.695414    4664 kubeadm.go:310] 
	I0731 21:33:35.695414    4664 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:33:35.695414    4664 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:33:35.695414    4664 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:33:35.695414    4664 kubeadm.go:310] 
	I0731 21:33:35.695940    4664 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:33:35.695940    4664 kubeadm.go:310] 
	I0731 21:33:35.696003    4664 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:33:35.696003    4664 kubeadm.go:310] 
	I0731 21:33:35.696003    4664 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:33:35.696003    4664 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:33:35.696003    4664 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:33:35.696573    4664 kubeadm.go:310] 
	I0731 21:33:35.696726    4664 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:33:35.696950    4664 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:33:35.696950    4664 kubeadm.go:310] 
	I0731 21:33:35.697124    4664 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 43i5zb.tb6p81ojmrtd6cec \
	I0731 21:33:35.697124    4664 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf \
	I0731 21:33:35.697124    4664 kubeadm.go:310] 	--control-plane 
	I0731 21:33:35.697124    4664 kubeadm.go:310] 
	I0731 21:33:35.697124    4664 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:33:35.697665    4664 kubeadm.go:310] 
	I0731 21:33:35.697829    4664 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 43i5zb.tb6p81ojmrtd6cec \
	I0731 21:33:35.698007    4664 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 21:33:35.698007    4664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:33:35.698007    4664 cni.go:84] Creating CNI manager for ""
	I0731 21:33:35.698007    4664 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:33:35.703732    4664 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:33:35.716939    4664 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:33:35.736654    4664 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
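The 1-k8s.conflist written here is minikube's bridge CNI configuration. Its exact contents are not reproduced in this log; a bridge conflist of this kind typically looks like the sketch below, where every field value is illustrative rather than the actual 496-byte file that was copied:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }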
	I0731 21:33:35.768029    4664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:33:35.781155    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-608900 minikube.k8s.io/updated_at=2024_07_31T21_33_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=addons-608900 minikube.k8s.io/primary=true
	I0731 21:33:35.781155    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:35.786945    4664 ops.go:34] apiserver oom_adj: -16
	I0731 21:33:35.952220    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:36.458241    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:36.954911    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:37.455147    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:37.958385    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:38.455261    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:38.955782    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:39.458202    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:39.961053    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:40.463431    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:40.965146    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:41.463656    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:41.963850    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:42.465675    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:42.966123    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:43.455763    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:43.956938    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:44.454830    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:44.957637    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:45.458327    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:45.958441    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:46.460587    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:46.965497    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:47.467895    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:47.954443    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:48.465290    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:48.966472    4664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:33:49.118460    4664 kubeadm.go:1113] duration metric: took 13.3501922s to wait for elevateKubeSystemPrivileges
	I0731 21:33:49.119531    4664 kubeadm.go:394] duration metric: took 27.3797469s to StartCluster
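The repeated "kubectl get sa default" runs between 21:33:35 and 21:33:49 above are minikube waiting for the cluster's "default" ServiceAccount to appear before it finishes elevating kube-system privileges; the 13.35s duration metric measures that wait. A minimal sketch of the same loop, assuming kubectl is on PATH and KUBECONFIG already points at the new cluster:

    # poll until the controller manager has created the default ServiceAccount
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done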
	I0731 21:33:49.119531    4664 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:49.119531    4664 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:33:49.120455    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:33:49.121457    4664 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.25.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 21:33:49.121457    4664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 21:33:49.122505    4664 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 21:33:49.122505    4664 addons.go:69] Setting yakd=true in profile "addons-608900"
	I0731 21:33:49.122505    4664 addons.go:234] Setting addon yakd=true in "addons-608900"
	I0731 21:33:49.122505    4664 config.go:182] Loaded profile config "addons-608900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:33:49.122505    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.122505    4664 addons.go:69] Setting cloud-spanner=true in profile "addons-608900"
	I0731 21:33:49.122505    4664 addons.go:234] Setting addon cloud-spanner=true in "addons-608900"
	I0731 21:33:49.122505    4664 addons.go:69] Setting storage-provisioner=true in profile "addons-608900"
	I0731 21:33:49.122505    4664 addons.go:69] Setting gcp-auth=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting helm-tiller=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:234] Setting addon helm-tiller=true in "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting default-storageclass=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.123526    4664 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-608900"
	I0731 21:33:49.123526    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.123526    4664 addons.go:69] Setting ingress=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:234] Setting addon ingress=true in "addons-608900"
	I0731 21:33:49.123526    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.123526    4664 addons.go:234] Setting addon storage-provisioner=true in "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-608900"
	I0731 21:33:49.123526    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.124470    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.123526    4664 mustload.go:65] Loading cluster: addons-608900
	I0731 21:33:49.123526    4664 addons.go:69] Setting metrics-server=true in profile "addons-608900"
	I0731 21:33:49.122505    4664 addons.go:69] Setting inspektor-gadget=true in profile "addons-608900"
	I0731 21:33:49.124470    4664 addons.go:234] Setting addon inspektor-gadget=true in "addons-608900"
	I0731 21:33:49.124470    4664 addons.go:234] Setting addon metrics-server=true in "addons-608900"
	I0731 21:33:49.124470    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.124470    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.123526    4664 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-608900"
	I0731 21:33:49.125593    4664 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting volcano=true in profile "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-608900"
	I0731 21:33:49.126464    4664 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-608900"
	I0731 21:33:49.126464    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.127463    4664 out.go:177] * Verifying Kubernetes components...
	I0731 21:33:49.124470    4664 config.go:182] Loaded profile config "addons-608900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:33:49.123526    4664 addons.go:69] Setting ingress-dns=true in profile "addons-608900"
	I0731 21:33:49.127463    4664 addons.go:234] Setting addon ingress-dns=true in "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting registry=true in profile "addons-608900"
	I0731 21:33:49.127463    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.127463    4664 addons.go:234] Setting addon registry=true in "addons-608900"
	I0731 21:33:49.126464    4664 addons.go:234] Setting addon volcano=true in "addons-608900"
	I0731 21:33:49.123526    4664 addons.go:69] Setting volumesnapshots=true in profile "addons-608900"
	I0731 21:33:49.128473    4664 addons.go:234] Setting addon volumesnapshots=true in "addons-608900"
	I0731 21:33:49.128473    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.129478    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.129478    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:49.133449    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.133449    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.134460    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.134460    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.134460    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.135448    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.135448    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.136473    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.136473    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.136473    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.136473    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.136473    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.137471    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.137471    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.138462    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.137471    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:49.164743    4664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:33:50.361690    4664 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1958167s)
	I0731 21:33:50.379692    4664 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.2582186s)
	I0731 21:33:50.379692    4664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 21:33:50.387693    4664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:33:51.646874    4664 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2591656s)
	I0731 21:33:51.652877    4664 node_ready.go:35] waiting up to 6m0s for node "addons-608900" to be "Ready" ...
	I0731 21:33:51.653876    4664 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.274168s)
	I0731 21:33:51.653876    4664 start.go:971] {"host.minikube.internal": 172.17.16.1} host record injected into CoreDNS's ConfigMap
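The sed pipeline above inserts a hosts block (and a log directive) into the CoreDNS Corefile before replacing the ConfigMap. Reconstructed from those sed expressions alone, the edited portion of the Corefile looks roughly like this, with unrelated directives elided:

    .:53 {
        log
        errors
        ...
        hosts {
           172.17.16.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }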
	I0731 21:33:52.341792    4664 node_ready.go:49] node "addons-608900" has status "Ready":"True"
	I0731 21:33:52.341792    4664 node_ready.go:38] duration metric: took 688.9058ms for node "addons-608900" to be "Ready" ...
	I0731 21:33:52.341792    4664 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:52.713834    4664 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace to be "Ready" ...
	I0731 21:33:53.053850    4664 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-608900" context rescaled to 1 replicas
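Rescaling the coredns deployment to a single replica (kapi.go:214 above) is roughly equivalent to the following kubectl call against the same context; this is a hedged example, not the exact API call minikube makes:

    # scale the kube-system coredns deployment down to one replica
    kubectl --context addons-608900 -n kube-system scale deployment coredns --replicas=1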
	I0731 21:33:54.824375    4664 pod_ready.go:102] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.486218    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:55.486218    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:55.510223    4664 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-608900"
	I0731 21:33:55.510223    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:55.512230    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:55.916052    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:55.916052    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:55.924074    4664 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 21:33:55.930070    4664 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 21:33:55.930070    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 21:33:55.930070    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:55.969930    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:55.969930    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:55.969930    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:55.969930    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:55.971933    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:55.984926    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:55.977932    4664 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:33:55.990935    4664 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 21:33:55.999925    4664 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 21:33:55.999925    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 21:33:55.999925    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.007939    4664 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 21:33:56.011272    4664 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:33:56.011272    4664 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:33:56.011272    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.017130    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.017178    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.027290    4664 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 21:33:56.039767    4664 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 21:33:56.043372    4664 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 21:33:56.045077    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 21:33:56.045077    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.051611    4664 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:33:56.065144    4664 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 21:33:56.077147    4664 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 21:33:56.077147    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 21:33:56.077147    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.196200    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.196200    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.200201    4664 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 21:33:56.204194    4664 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 21:33:56.204194    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 21:33:56.204194    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.213188    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.213188    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.216198    4664 addons.go:234] Setting addon default-storageclass=true in "addons-608900"
	I0731 21:33:56.216198    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:56.219200    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.536198    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.536198    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.559316    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 21:33:56.598685    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 21:33:56.609399    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.609399    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.610397    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 21:33:56.615075    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 21:33:56.627221    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.630270    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.631600    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.633332    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.633332    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:33:56.631832    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 21:33:56.634328    4664 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 21:33:56.634328    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.637325    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 21:33:56.641331    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 21:33:56.645342    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 21:33:56.653114    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 21:33:56.657102    4664 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 21:33:56.660104    4664 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 21:33:56.660104    4664 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 21:33:56.660104    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.665097    4664 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 21:33:56.676106    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 21:33:56.676106    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 21:33:56.676106    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.702039    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.702112    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.709854    4664 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:33:56.716337    4664 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:33:56.716337    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:33:56.716337    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:56.823929    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.823929    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.838777    4664 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0731 21:33:56.842988    4664 pod_ready.go:102] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:56.856305    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:56.856305    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:56.879012    4664 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0731 21:33:56.929803    4664 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 21:33:56.956295    4664 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0731 21:33:57.065137    4664 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0731 21:33:57.065137    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0731 21:33:57.065137    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:57.045151    4664 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 21:33:57.326747    4664 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 21:33:57.327747    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:57.388750    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:33:57.390749    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:33:59.427808    4664 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 21:33:59.488813    4664 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 21:33:59.488813    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 21:33:59.488813    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:33:59.584534    4664 pod_ready.go:102] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.825503    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:01.825503    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:01.830514    4664 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 21:34:01.837497    4664 out.go:177]   - Using image docker.io/busybox:stable
	I0731 21:34:01.840485    4664 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 21:34:01.840485    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 21:34:01.840485    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:34:01.907972    4664 pod_ready.go:102] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:02.323448    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.323448    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.323448    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:02.347380    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.347908    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.349395    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:02.502816    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.502816    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.502816    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:02.659082    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.659082    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.659082    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:02.687622    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.687622    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.687622    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:02.988090    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:02.988090    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:02.988090    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:03.010613    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.010613    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.011761    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:03.033510    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.033510    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.033510    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:03.178275    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.178275    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.178275    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:03.249873    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.249873    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.249873    4664 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:03.249873    4664 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:03.249873    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:34:03.305266    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.305266    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.305266    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:03.762952    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:03.762952    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:03.762952    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:04.243797    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:04.243797    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:04.244794    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:04.419518    4664 pod_ready.go:102] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.796509    4664 pod_ready.go:92] pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:04.796509    4664 pod_ready.go:81] duration metric: took 12.0825228s for pod "coredns-7db6d8ff4d-bqpjs" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:04.796509    4664 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ptk8d" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:04.805006    4664 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 21:34:04.805006    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:34:05.145905    4664 pod_ready.go:92] pod "coredns-7db6d8ff4d-ptk8d" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:05.145905    4664 pod_ready.go:81] duration metric: took 349.3913ms for pod "coredns-7db6d8ff4d-ptk8d" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.145905    4664 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.734166    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:05.734166    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:05.734166    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:05.808794    4664 pod_ready.go:92] pod "etcd-addons-608900" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:05.808794    4664 pod_ready.go:81] duration metric: took 662.8814ms for pod "etcd-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.808794    4664 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.841343    4664 pod_ready.go:92] pod "kube-apiserver-addons-608900" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:05.841531    4664 pod_ready.go:81] duration metric: took 32.632ms for pod "kube-apiserver-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.841531    4664 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.888577    4664 pod_ready.go:92] pod "kube-controller-manager-addons-608900" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:05.888577    4664 pod_ready.go:81] duration metric: took 47.0452ms for pod "kube-controller-manager-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:05.888577    4664 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n29jx" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:06.115563    4664 pod_ready.go:92] pod "kube-proxy-n29jx" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:06.115563    4664 pod_ready.go:81] duration metric: took 226.9833ms for pod "kube-proxy-n29jx" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:06.115563    4664 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:06.156586    4664 pod_ready.go:92] pod "kube-scheduler-addons-608900" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:06.156586    4664 pod_ready.go:81] duration metric: took 41.023ms for pod "kube-scheduler-addons-608900" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:06.156586    4664 pod_ready.go:38] duration metric: took 13.8146204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
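The per-pod readiness waits above can be approximated from the host with kubectl wait, using the same labels listed in the log (k8s-app=kube-dns, component=kube-apiserver, and so on); two representative examples:

    # wait for CoreDNS and the API server pods to report Ready, mirroring the waits above
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m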
	I0731 21:34:06.156586    4664 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:06.185129    4664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:06.303364    4664 api_server.go:72] duration metric: took 17.1806894s to wait for apiserver process to appear ...
	I0731 21:34:06.303364    4664 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:06.303364    4664 api_server.go:253] Checking apiserver healthz at https://172.17.25.32:8443/healthz ...
	I0731 21:34:06.476369    4664 api_server.go:279] https://172.17.25.32:8443/healthz returned 200:
	ok
	I0731 21:34:06.489710    4664 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:06.489710    4664 api_server.go:131] duration metric: took 186.3443ms to wait for apiserver health ...
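The healthz probe above can be reproduced from the host. In a default kubeadm setup the /healthz endpoint is readable without credentials, so a plain HTTPS request against the address from the log is enough; -k skips certificate verification because the cluster CA is not in the host trust store:

    # manual equivalent of the healthz check logged above
    curl -k https://172.17.25.32:8443/healthz
    # expected response body: ok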
	I0731 21:34:06.489710    4664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:06.503307    4664 system_pods.go:59] 7 kube-system pods found
	I0731 21:34:06.503405    4664 system_pods.go:61] "coredns-7db6d8ff4d-bqpjs" [a8a202cd-e3df-4c14-993e-491326f00667] Running
	I0731 21:34:06.503472    4664 system_pods.go:61] "coredns-7db6d8ff4d-ptk8d" [bebf3ace-34b0-45c2-ace0-a46c3ddcf730] Running
	I0731 21:34:06.503472    4664 system_pods.go:61] "etcd-addons-608900" [3f5f3b1c-25f4-4484-9ca4-3fe79693d727] Running
	I0731 21:34:06.503472    4664 system_pods.go:61] "kube-apiserver-addons-608900" [2a343344-598a-47f2-b877-2cfeb18f536f] Running
	I0731 21:34:06.503472    4664 system_pods.go:61] "kube-controller-manager-addons-608900" [0a75b1ce-4a8e-4840-82d1-c5fa636a3850] Running
	I0731 21:34:06.503531    4664 system_pods.go:61] "kube-proxy-n29jx" [59e9077d-4789-434a-b9d3-ca056941d68c] Running
	I0731 21:34:06.503531    4664 system_pods.go:61] "kube-scheduler-addons-608900" [62b362ba-cbb8-4db1-9936-3524d8071ca2] Running
	I0731 21:34:06.503531    4664 system_pods.go:74] duration metric: took 13.8201ms to wait for pod list to return data ...
	I0731 21:34:06.503531    4664 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:06.512193    4664 default_sa.go:45] found service account: "default"
	I0731 21:34:06.512193    4664 default_sa.go:55] duration metric: took 8.6626ms for default service account to be created ...
	I0731 21:34:06.512193    4664 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:06.534300    4664 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:06.534300    4664 system_pods.go:89] "coredns-7db6d8ff4d-bqpjs" [a8a202cd-e3df-4c14-993e-491326f00667] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "coredns-7db6d8ff4d-ptk8d" [bebf3ace-34b0-45c2-ace0-a46c3ddcf730] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "etcd-addons-608900" [3f5f3b1c-25f4-4484-9ca4-3fe79693d727] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "kube-apiserver-addons-608900" [2a343344-598a-47f2-b877-2cfeb18f536f] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "kube-controller-manager-addons-608900" [0a75b1ce-4a8e-4840-82d1-c5fa636a3850] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "kube-proxy-n29jx" [59e9077d-4789-434a-b9d3-ca056941d68c] Running
	I0731 21:34:06.534300    4664 system_pods.go:89] "kube-scheduler-addons-608900" [62b362ba-cbb8-4db1-9936-3524d8071ca2] Running
	I0731 21:34:06.534300    4664 system_pods.go:126] duration metric: took 22.1065ms to wait for k8s-apps to be running ...
	I0731 21:34:06.534300    4664 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:06.556303    4664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:06.630536    4664 system_svc.go:56] duration metric: took 96.2346ms WaitForService to wait for kubelet
	I0731 21:34:06.630867    4664 kubeadm.go:582] duration metric: took 17.5091893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:06.630867    4664 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:06.669008    4664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:06.669008    4664 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:06.669008    4664 node_conditions.go:105] duration metric: took 38.1404ms to run NodePressure ...
	I0731 21:34:06.669008    4664 start.go:241] waiting for startup goroutines ...
	I0731 21:34:08.152586    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:08.152586    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:08.152586    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:10.180407    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:10.180407    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.180951    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:10.248577    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.248577    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.249575    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.366033    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.366033    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.366765    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.446858    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.446858    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.447845    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.617275    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.617346    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.617972    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.651183    4664 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:10.651183    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 21:34:10.742823    4664 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 21:34:10.742823    4664 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 21:34:10.744838    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.744838    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.745828    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.765823    4664 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:10.765823    4664 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:10.811651    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.811651    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.812052    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.864229    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.864229    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.865126    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.900892    4664 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 21:34:10.900892    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 21:34:10.931718    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.931777    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.932083    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:10.951693    4664 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 21:34:10.951693    4664 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 21:34:10.998540    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:10.998540    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:10.999948    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:11.006506    4664 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:11.006635    4664 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:11.081470    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:11.081470    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:11.082469    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:11.157487    4664 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 21:34:11.157487    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:11.157487    4664 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 21:34:11.157487    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:11.158468    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:11.160465    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 21:34:11.162471    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:11.172470    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:11.172470    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:11.172470    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:11.259102    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:11.273100    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 21:34:11.319193    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 21:34:11.330596    4664 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 21:34:11.330596    4664 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 21:34:11.393528    4664 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 21:34:11.393590    4664 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 21:34:11.434781    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 21:34:11.531905    4664 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 21:34:11.532025    4664 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 21:34:11.722400    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 21:34:11.722400    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 21:34:11.745027    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 21:34:11.745126    4664 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 21:34:11.794055    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:11.794055    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:11.794055    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:11.886820    4664 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 21:34:11.886820    4664 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 21:34:11.895223    4664 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 21:34:11.895291    4664 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 21:34:11.926360    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0731 21:34:11.932437    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:11.932437    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:11.933103    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:11.980501    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 21:34:11.980580    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 21:34:12.005333    4664 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:34:12.005333    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 21:34:12.077469    4664 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 21:34:12.077557    4664 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 21:34:12.091276    4664 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 21:34:12.091276    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 21:34:12.257393    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 21:34:12.257461    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 21:34:12.262854    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:34:12.376567    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 21:34:12.445022    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 21:34:12.463986    4664 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 21:34:12.463986    4664 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 21:34:12.569667    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 21:34:12.569667    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 21:34:12.580894    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:12.580970    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:12.581780    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:12.710371    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 21:34:12.778074    4664 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 21:34:12.778074    4664 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 21:34:12.857330    4664 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 21:34:12.857429    4664 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 21:34:13.122902    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 21:34:13.122962    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 21:34:13.168338    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 21:34:13.168338    4664 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 21:34:13.330676    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:13.330740    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:13.331847    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:13.349760    4664 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 21:34:13.349760    4664 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 21:34:13.424381    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 21:34:13.500307    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 21:34:13.500307    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 21:34:13.516933    4664 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 21:34:13.517003    4664 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 21:34:13.994369    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 21:34:13.994446    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 21:34:14.016055    4664 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 21:34:14.016110    4664 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 21:34:14.102779    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:14.102779    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:14.103289    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:14.447092    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.530171    4664 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 21:34:14.530171    4664 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 21:34:14.583248    4664 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 21:34:14.583302    4664 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 21:34:14.934951    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.7744387s)
	I0731 21:34:14.935073    4664 addons.go:475] Verifying addon registry=true in "addons-608900"
	I0731 21:34:14.939340    4664 out.go:177] * Verifying registry addon...
	I0731 21:34:14.945409    4664 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 21:34:14.949728    4664 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 21:34:14.949728    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 21:34:14.955953    4664 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 21:34:14.955953    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:15.167325    4664 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 21:34:15.206781    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 21:34:15.278652    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.1161296s)
	I0731 21:34:15.467005    4664 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 21:34:15.467005    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:15.798662    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 21:34:15.984881    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:16.407165    4664 addons.go:234] Setting addon gcp-auth=true in "addons-608900"
	I0731 21:34:16.407165    4664 host.go:66] Checking if "addons-608900" exists ...
	I0731 21:34:16.408780    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:34:16.468150    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:17.070132    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:17.425590    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.1524128s)
	I0731 21:34:17.425662    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.1063918s)
	I0731 21:34:17.427060    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.1678796s)
	I0731 21:34:17.427060    4664 addons.go:475] Verifying addon metrics-server=true in "addons-608900"
	I0731 21:34:17.717548    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:17.980287    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:18.458146    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:18.962928    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:19.006455    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:19.006455    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:19.015455    4664 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 21:34:19.016454    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-608900 ).state
	I0731 21:34:19.545884    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:19.960389    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:20.526532    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:20.967745    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:21.499362    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:21.503979    4664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:34:21.503979    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:21.503979    4664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-608900 ).networkadapters[0]).ipaddresses[0]
	I0731 21:34:22.033995    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:22.467982    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:22.960099    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:23.455657    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:23.998785    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:24.208922    4664 main.go:141] libmachine: [stdout =====>] : 172.17.25.32
	
	I0731 21:34:24.208922    4664 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:34:24.210207    4664 sshutil.go:53] new ssh client: &{IP:172.17.25.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-608900\id_rsa Username:docker}
	I0731 21:34:24.275753    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.8407198s)
	I0731 21:34:24.275818    4664 addons.go:475] Verifying addon ingress=true in "addons-608900"
	I0731 21:34:24.279692    4664 out.go:177] * Verifying ingress addon...
	I0731 21:34:24.286959    4664 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 21:34:24.305499    4664 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 21:34:24.305553    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:24.454760    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:24.799657    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:24.958616    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:25.357819    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:25.510454    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:25.835383    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:25.964962    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:26.405474    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:26.562089    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:26.815542    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:26.985562    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:27.333331    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:27.463815    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:27.823253    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:27.983961    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (15.7208247s)
	I0731 21:34:27.984035    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (15.6072708s)
	W0731 21:34:27.984103    4664 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 21:34:27.984183    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.5388848s)
	I0731 21:34:27.984183    4664 retry.go:31] will retry after 286.029854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 21:34:27.984251    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (15.2736197s)
	I0731 21:34:27.984314    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.5596865s)
	I0731 21:34:27.984314    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.5370512s)
	I0731 21:34:27.987061    4664 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-608900 service yakd-dashboard -n yakd-dashboard
	
	I0731 21:34:27.990697    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (16.0641342s)
	I0731 21:34:27.999322    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0731 21:34:28.054689    4664 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0731 21:34:28.283424    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:34:28.403192    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:28.497631    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:28.820159    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:28.978025    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:29.152891    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.9459351s)
	I0731 21:34:29.152891    4664 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-608900"
	I0731 21:34:29.152891    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (13.3540608s)
	I0731 21:34:29.152891    4664 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.1373086s)
	I0731 21:34:29.160518    4664 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 21:34:29.162823    4664 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:34:29.167828    4664 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 21:34:29.170953    4664 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 21:34:29.175711    4664 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 21:34:29.175903    4664 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 21:34:29.225646    4664 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 21:34:29.225646    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:29.321042    4664 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 21:34:29.321111    4664 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 21:34:29.362789    4664 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 21:34:29.362872    4664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 21:34:29.448082    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:29.457374    4664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 21:34:29.466501    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:29.743718    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:29.804444    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:29.971038    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:30.187665    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:30.294387    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:30.468324    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:30.691974    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:30.799692    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:30.921458    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.6379464s)
	I0731 21:34:30.988851    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:31.178866    4664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.7214711s)
	I0731 21:34:31.186403    4664 addons.go:475] Verifying addon gcp-auth=true in "addons-608900"
	I0731 21:34:31.192635    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:31.192635    4664 out.go:177] * Verifying gcp-auth addon...
	I0731 21:34:31.197069    4664 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 21:34:31.216598    4664 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 21:34:31.299636    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:31.460681    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:31.681659    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:31.792548    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:31.969013    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:32.188985    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:32.294510    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:32.455624    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:32.675060    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:32.800739    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:32.957802    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:33.180402    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:33.304073    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:33.461479    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:33.685230    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:33.794818    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:33.952805    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:34.174345    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:34.299843    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:34.458998    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:34.681309    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:34.802917    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:34.961747    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:35.183211    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:35.292754    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:35.467603    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:35.677217    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:35.799877    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:35.956052    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:36.177964    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:36.301435    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:36.459379    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:36.679026    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:36.803134    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:37.012187    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:37.205542    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:37.307109    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:37.463198    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:37.684929    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:37.802826    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:37.960810    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:38.186461    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:38.309076    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:38.451790    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:38.692384    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:38.796408    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:38.957757    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:39.178015    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:39.298772    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:39.459830    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:39.682445    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:39.807193    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:39.964014    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:40.605402    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:40.605651    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:40.606386    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:40.683631    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:40.808836    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:40.965279    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:41.215904    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:41.364670    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:41.715460    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:41.716146    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:41.805824    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:42.021604    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:42.185092    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:42.309319    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:42.482345    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:42.690465    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:42.796631    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:42.956123    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:43.181665    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:43.303973    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:43.462123    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:43.685858    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:43.794315    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:43.954096    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:44.473516    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:44.476572    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:44.479738    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:44.690510    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:44.805073    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:44.957409    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:45.177726    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:45.308883    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:45.460713    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:45.681234    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:45.806893    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:45.965022    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:46.189677    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:46.294995    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:46.454174    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:46.677958    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:46.817326    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:46.969056    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:47.188856    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:47.298184    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:47.453998    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:47.690862    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:47.799056    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:47.958297    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:48.181702    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:48.307108    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:48.466371    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:48.692230    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:48.798047    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:48.954847    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:49.178250    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:49.301505    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:49.458077    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:49.682316    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:49.805668    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:49.963491    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:50.189169    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:50.299363    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:50.455337    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:50.681185    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:50.803543    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:50.961059    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:51.183463    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:51.307587    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:51.467184    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:52.327840    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:52.336310    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:52.340176    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:52.350357    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:52.352832    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:52.469425    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:52.690289    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:52.795519    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:52.967655    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:53.187181    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:53.296607    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:53.461861    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:53.695511    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:53.808739    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:53.959306    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:54.180660    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:54.302331    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:54.464234    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:54.689569    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:54.806616    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:54.955655    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:55.176409    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:55.301263    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:55.464506    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:55.684000    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:55.807701    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:55.968685    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:56.187419    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:56.294144    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:56.454809    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:56.681117    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:56.806930    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:56.964591    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:57.184853    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:57.292823    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:57.451845    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:57.677228    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:57.801553    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:57.962632    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:58.186442    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:58.295023    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:58.467195    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:58.689168    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:58.796039    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:58.953525    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:59.175594    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:59.302279    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:59.460581    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:34:59.682968    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:34:59.807474    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:34:59.970036    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:00.189324    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:00.296206    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:00.453745    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:00.688080    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:00.794244    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:00.954661    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:01.192954    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:01.299329    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:01.457498    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:01.716834    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:01.802721    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:01.960398    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:02.184866    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:02.310599    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:02.465618    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:02.691163    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:02.796409    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:02.957901    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:03.181771    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:03.298635    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:03.457389    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:03.678569    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:03.800167    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:03.957605    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:04.192445    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:04.296001    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:04.810942    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:04.813896    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:04.814371    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:04.968388    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:05.184330    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:05.310770    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:05.467070    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:05.696811    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:05.811649    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:05.966542    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:06.190908    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:06.296485    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:06.456996    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:35:06.687483    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:06.803085    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:06.965265    4664 kapi.go:107] duration metric: took 52.0192006s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 21:35:07.188074    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:07.300045    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:07.684943    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:07.803389    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:08.182525    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:08.305201    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:08.684675    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:08.805131    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:09.192299    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:09.294620    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:09.676040    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:09.798033    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:10.217764    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:10.302802    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:10.684381    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:10.802010    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:11.189481    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:11.303345    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:11.693783    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:11.802653    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:12.182016    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:12.301585    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:12.684869    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:12.807808    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:13.197951    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:13.297464    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:13.680934    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:13.802586    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:14.186707    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:14.298108    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:14.690910    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:14.798634    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:15.179352    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:15.302851    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:15.684503    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:15.805396    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:16.189240    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:16.296651    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:16.679115    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:16.804899    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:17.189439    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:17.298348    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:17.679759    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:17.803625    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:18.183644    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:18.294749    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:18.696710    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:18.798255    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:19.176419    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:19.302716    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:19.680063    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:19.805206    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:20.188914    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:20.296154    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:20.688448    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:20.796808    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:21.177757    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:21.299207    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:21.678130    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:21.802249    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:22.182770    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:22.311434    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:22.692425    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:22.800400    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:23.179772    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:23.304365    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:23.687466    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:23.794965    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:24.178054    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:24.301110    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:24.682093    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:24.808480    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:25.175961    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:25.300821    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:25.683505    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:25.804824    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:26.909506    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:26.916185    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:26.925364    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:26.925364    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:27.184575    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:27.305836    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:27.686130    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:27.793095    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:28.188919    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:28.295592    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:28.676599    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:28.800957    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:29.179509    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:29.318032    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:29.685134    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:29.793140    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:30.184343    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:30.301795    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:30.682112    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:30.802800    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:31.280363    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:31.304122    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:31.687934    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:31.796671    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:32.185935    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:32.306632    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:32.686403    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:32.796654    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:33.190264    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:33.297396    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:33.677605    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:33.800012    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:34.185311    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:34.293701    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:34.777517    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:34.961634    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:35.269495    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:35.294535    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:35.677867    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:35.800672    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:36.224258    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:36.303538    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:36.689171    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:36.808346    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:37.261461    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:37.297040    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:37.678648    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:37.799895    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:38.185729    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:38.305228    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:38.690230    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:38.798305    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:39.184287    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:39.305895    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:39.684631    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:39.793797    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:40.177474    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:40.300679    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:40.683009    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:40.805892    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:41.187797    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:41.295701    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:41.679219    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:41.800859    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:42.180903    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:42.304595    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:42.683545    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:42.804359    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:43.187657    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:43.294541    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:43.688153    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:43.794441    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:44.310196    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:44.311367    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:44.716471    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:44.810114    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:45.187842    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:45.306679    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:45.688547    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:45.799242    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:46.179668    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:46.303387    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:46.681514    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:46.803476    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:47.186848    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:47.294529    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:47.692444    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:47.806889    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:48.178902    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:48.302607    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:48.679064    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:48.800269    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:49.180847    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:49.310568    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:49.781400    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:49.808186    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:50.189121    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:50.298648    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:50.680847    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:50.806328    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:51.188057    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:51.296907    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:51.691397    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:51.800963    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:52.183204    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:52.306672    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:52.687595    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:52.796600    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:53.282584    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:53.298378    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:53.678426    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:53.801595    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:54.187361    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:54.308515    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:54.692756    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:54.796575    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:55.416487    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:55.419754    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:55.755517    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:55.810667    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:56.178699    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:56.300744    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:56.685502    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:56.807494    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:57.191835    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:57.299538    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:57.683485    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:57.807886    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:58.196120    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:58.297401    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:58.682228    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:58.806641    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:59.189638    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:59.296198    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:35:59.678434    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:35:59.801357    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:00.204710    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:00.732213    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:00.732336    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:00.806941    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:01.186043    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:01.295787    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:01.691660    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:01.798743    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:02.183413    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:02.305570    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:02.689346    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:02.797614    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:03.179329    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:03.306729    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:03.679440    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:03.803689    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:04.183229    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:04.306554    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:04.694912    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:04.799527    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:05.182182    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:05.613945    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:05.683866    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:05.918524    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:06.184810    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:06.307108    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:06.688387    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:06.796611    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:07.180114    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:07.314683    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:07.686984    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:07.793329    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:08.375594    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:08.375594    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:08.718896    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:08.811883    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:09.195727    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:09.299258    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:09.680849    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:09.800015    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:10.185926    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:10.307929    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:10.688044    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:10.797816    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:11.178646    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:11.301585    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:11.683419    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:12.073983    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:12.226248    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:12.307978    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:12.684718    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:12.803833    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:13.184081    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:13.309752    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:13.698859    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:13.797700    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:14.222611    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:14.311598    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:14.688363    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:14.796444    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:15.178648    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:15.304399    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:15.721814    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:15.915530    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:16.188708    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:16.298446    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:16.679113    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:16.799992    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:17.184869    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:17.313121    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:17.686775    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:17.796128    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:18.178681    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:18.302927    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:18.686891    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:18.795361    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:19.179996    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:19.304181    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:19.706894    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:19.804946    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:20.182030    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:20.308512    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:20.688861    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:20.796978    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:21.189577    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:21.298669    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:21.677733    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:21.804089    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:22.187879    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:22.474009    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:22.694634    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:22.801154    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:23.191965    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:23.301236    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:23.695703    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:23.807135    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:24.177377    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:24.301736    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:24.682615    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:24.806253    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:25.191150    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:25.297836    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:25.867360    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:25.868075    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:26.232354    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:26.302584    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:26.681239    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:26.807999    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:27.190075    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:27.312072    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:27.677017    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:27.801965    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:28.187990    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:28.307764    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:28.693672    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:28.799072    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:29.303245    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:29.306488    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:29.694998    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:29.807230    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:30.194972    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:30.305251    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:30.682535    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:30.808875    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:31.191574    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:31.299189    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:31.683219    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:31.807934    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:32.177016    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:32.301256    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:32.681544    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:32.803261    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:33.187768    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:33.295368    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:33.678836    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:33.807883    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:34.179309    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:34.300248    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:34.684617    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:34.806312    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:35.193365    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:35.297744    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:35.679161    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:35.801604    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:36.185143    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:36.307472    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:36.689406    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:36.797527    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:37.184498    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:37.306083    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:37.685768    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:37.806090    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:38.189616    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:38.303914    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:38.695174    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:38.808748    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:39.204312    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:39.295965    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:39.697182    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:39.801043    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:40.180541    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:40.304467    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:40.683757    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:40.817125    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:41.191065    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:41.296718    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:41.689758    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:41.800676    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:42.192081    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:42.299146    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:42.683888    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:42.804104    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:43.188958    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:43.295942    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:43.741185    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:43.800147    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:44.188692    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:44.295448    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:44.690331    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:44.797362    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:45.183269    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:45.300689    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:45.685073    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:45.810183    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:46.184724    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:46.308055    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:46.689282    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:46.799549    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:47.185171    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:47.306286    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:47.692826    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:47.797854    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:48.183653    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:48.314786    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:48.691237    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:48.802685    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:49.238884    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:49.407348    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:49.757449    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:49.977781    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:50.181987    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:50.305711    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:50.688154    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:50.810416    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:51.191129    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:51.298548    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:51.681084    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:51.803385    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:52.184734    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:52.307418    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:52.692451    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:52.799094    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:53.180781    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:53.305727    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:53.686245    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:53.812520    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:54.187704    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:54.322266    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:54.691276    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:54.799992    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:55.181259    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:55.302395    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:55.683548    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:55.806974    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:56.189254    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:56.295400    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:56.679123    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:56.809270    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:57.188422    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:57.296392    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:57.695478    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:57.810281    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:58.317354    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:58.321654    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:58.714327    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:58.798596    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:59.423301    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:36:59.426293    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:59.691910    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:36:59.804569    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:37:00.189229    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:00.303063    4664 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:37:00.706499    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:00.809493    4664 kapi.go:107] duration metric: took 2m36.5205609s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 21:37:01.192294    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:01.679421    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:02.187886    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:02.690575    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:03.180209    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:03.689545    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:37:04.189406    4664 kapi.go:107] duration metric: took 2m35.0196249s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 21:37:15.210414    4664 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 21:37:15.210414    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:15.710827    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:16.213745    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:16.711533    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:17.212275    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:17.711675    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:18.212100    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:18.710593    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:19.214911    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:19.715114    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:20.213211    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:20.709952    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:21.209578    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:21.708372    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:22.208852    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:22.711661    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:23.211337    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:23.710370    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:24.211696    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:24.709178    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:25.214290    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:25.713977    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:26.213349    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:26.715407    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:27.217375    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:27.718090    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:28.205493    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:28.711560    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:29.215555    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:29.716675    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:30.220838    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:30.719764    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:31.215467    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:31.719874    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:32.220420    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:32.713957    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:33.216840    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:33.711220    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:34.211270    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:34.716926    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:35.209652    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:35.720008    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:36.212118    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:36.715261    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:37.216354    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:37.717737    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:38.220872    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:38.709564    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:39.214156    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:39.714851    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:40.214747    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:40.715766    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:41.212770    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:41.713585    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:42.209592    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:42.710570    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:43.211244    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:43.709548    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:44.208760    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:44.709564    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:45.211327    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:45.714515    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:46.214278    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:46.713405    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:47.213197    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:47.715126    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:48.213324    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:48.719451    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:49.206015    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:49.722896    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:50.221723    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:50.707089    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:51.216174    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:51.750550    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:52.206204    4664 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:37:52.705096    4664 kapi.go:107] duration metric: took 3m21.5054401s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 21:37:52.707906    4664 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-608900 cluster.
	I0731 21:37:52.712478    4664 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 21:37:52.718552    4664 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 21:37:52.725295    4664 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, helm-tiller, ingress-dns, yakd, volcano, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0731 21:37:52.731647    4664 addons.go:510] duration metric: took 4m3.606072s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin metrics-server helm-tiller ingress-dns yakd volcano storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0731 21:37:52.731647    4664 start.go:246] waiting for cluster config update ...
	I0731 21:37:52.731647    4664 start.go:255] writing updated cluster config ...
	I0731 21:37:52.745450    4664 ssh_runner.go:195] Run: rm -f paused
	I0731 21:37:53.016809    4664 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:37:53.022271    4664 out.go:177] * Done! kubectl is now configured to use "addons-608900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.637333924Z" level=info msg="shim disconnected" id=13eb2f83aefdc775e94c67645618f73858ca1de7faefdf95438eaadd1301ef26 namespace=moby
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.637494025Z" level=warning msg="cleaning up after shim disconnected" id=13eb2f83aefdc775e94c67645618f73858ca1de7faefdf95438eaadd1301ef26 namespace=moby
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.637546926Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.774572163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.775545673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.775767975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:40:26 addons-608900 dockerd[1430]: time="2024-07-31T21:40:26.776280280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.051893244Z" level=info msg="shim disconnected" id=78b5d84223e1acecd411bb57274740fdcb6a345795b5fa68d45d1bf7a9022b46 namespace=moby
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.052815553Z" level=warning msg="cleaning up after shim disconnected" id=78b5d84223e1acecd411bb57274740fdcb6a345795b5fa68d45d1bf7a9022b46 namespace=moby
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.052971954Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:40:27 addons-608900 dockerd[1424]: time="2024-07-31T21:40:27.053249857Z" level=info msg="ignoring event" container=78b5d84223e1acecd411bb57274740fdcb6a345795b5fa68d45d1bf7a9022b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.082446527Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:40:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:40:27 addons-608900 cri-dockerd[1325]: time="2024-07-31T21:40:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.538067240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.538157341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.538171241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.538733146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:40:27 addons-608900 dockerd[1424]: time="2024-07-31T21:40:27.666510228Z" level=info msg="ignoring event" container=a7c4a8c38044c788a406d5c9b09daebbf0e84c028022873a680d734a0d8c2478 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.668091943Z" level=info msg="shim disconnected" id=a7c4a8c38044c788a406d5c9b09daebbf0e84c028022873a680d734a0d8c2478 namespace=moby
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.668551647Z" level=warning msg="cleaning up after shim disconnected" id=a7c4a8c38044c788a406d5c9b09daebbf0e84c028022873a680d734a0d8c2478 namespace=moby
	Jul 31 21:40:27 addons-608900 dockerd[1430]: time="2024-07-31T21:40:27.668626747Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:40:29 addons-608900 dockerd[1424]: time="2024-07-31T21:40:29.432547486Z" level=info msg="ignoring event" container=ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:40:29 addons-608900 dockerd[1430]: time="2024-07-31T21:40:29.433630695Z" level=info msg="shim disconnected" id=ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd namespace=moby
	Jul 31 21:40:29 addons-608900 dockerd[1430]: time="2024-07-31T21:40:29.433875497Z" level=warning msg="cleaning up after shim disconnected" id=ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd namespace=moby
	Jul 31 21:40:29 addons-608900 dockerd[1430]: time="2024-07-31T21:40:29.433902497Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a7c4a8c38044c       a416a98b71e22                                                                                                                                3 seconds ago        Exited              helper-pod                               0                   ca0142bfb2f7c       helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133
	6a39bdf149c95       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                                              19 seconds ago       Exited              busybox                                  0                   9f37c272c43af       test-local-path
	3cfed9f1a266e       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              27 seconds ago       Exited              helper-pod                               0                   624ad9f454a50       helper-pod-create-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133
	a1c88dac3988e       gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b                                          43 seconds ago       Exited              registry-test                            0                   ba79fcca15085       registry-test
	0c09d97bcedf6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   fb814fad964c3       busybox
	2a0f40816d884       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago        Running             csi-snapshotter                          0                   68afd606f3d13       csi-hostpathplugin-zr957
	626c3d884afae       registry.k8s.io/ingress-nginx/controller@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a                             3 minutes ago        Running             controller                               0                   a068688581d2d       ingress-nginx-controller-6d9bd977d4-6ztjb
	c495d524fa3ff       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          3 minutes ago        Running             csi-provisioner                          0                   68afd606f3d13       csi-hostpathplugin-zr957
	9481c690e6cd5       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            3 minutes ago        Running             liveness-probe                           0                   68afd606f3d13       csi-hostpathplugin-zr957
	b8889ffba2912       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           3 minutes ago        Running             hostpath                                 0                   68afd606f3d13       csi-hostpathplugin-zr957
	53b7a14343f61       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                3 minutes ago        Running             node-driver-registrar                    0                   68afd606f3d13       csi-hostpathplugin-zr957
	a222763913755       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago        Running             csi-external-health-monitor-controller   0                   68afd606f3d13       csi-hostpathplugin-zr957
	e57c7fb9ca687       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago        Running             csi-resizer                              0                   2d5680d8505da       csi-hostpath-resizer-0
	61fbf2c6c378f       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago        Running             csi-attacher                             0                   3f64673526566       csi-hostpath-attacher-0
	b493380b2a633       684c5ea3b61b2                                                                                                                                4 minutes ago        Exited              patch                                    1                   afc912772e0b0       ingress-nginx-admission-patch-njdcl
	271f6ca01f2ef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   4 minutes ago        Exited              create                                   0                   5244f4ec63729       ingress-nginx-admission-create-glflk
	87145418c2f23       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago        Running             volume-snapshot-controller               0                   4151830f9ad15       snapshot-controller-745499f584-drv5b
	36ded5ca1cab0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago        Running             volume-snapshot-controller               0                   e419a228c3b13       snapshot-controller-745499f584-skblq
	1b65c2c4c30ce       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       4 minutes ago        Running             local-path-provisioner                   0                   fdeac0890a032       local-path-provisioner-8d985888d-2rvw6
	6d851ff666dbf       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago        Running             yakd                                     0                   06aac2f4462d9       yakd-dashboard-799879c74f-9wm7v
	d007c0160e237       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             5 minutes ago        Running             minikube-ingress-dns                     0                   cd96ee8d82217       kube-ingress-dns-minikube
	2cc808d509c6b       gcr.io/cloud-spanner-emulator/emulator@sha256:ea3a9e70a98bf648952401e964c5403d93e980837acf924288df19e0077ae7fb                               5 minutes ago        Running             cloud-spanner-emulator                   0                   eb74e3f45f587       cloud-spanner-emulator-5455fb9b69-bk296
	23e029bf107e0       nvcr.io/nvidia/k8s-device-plugin@sha256:89612c7851300ddeed218b9df0dcb33bbb8495282aa17c554038e52387ce7f1e                                     5 minutes ago        Running             nvidia-device-plugin-ctr                 0                   7b19e02b17441       nvidia-device-plugin-daemonset-5z9jr
	da71b9e468abd       6e38f40d628db                                                                                                                                6 minutes ago        Running             storage-provisioner                      0                   92adfe41d074c       storage-provisioner
	87504dbe6566c       cbb01a7bd410d                                                                                                                                6 minutes ago        Running             coredns                                  0                   0b55eb12268c1       coredns-7db6d8ff4d-ptk8d
	6111e4e7d0023       55bb025d2cfa5                                                                                                                                6 minutes ago        Running             kube-proxy                               0                   f1785554ed97e       kube-proxy-n29jx
	c8ff160b3a160       3edc18e7b7672                                                                                                                                7 minutes ago        Running             kube-scheduler                           0                   b4f1ebc358936       kube-scheduler-addons-608900
	67bfe5754edd6       76932a3b37d7e                                                                                                                                7 minutes ago        Running             kube-controller-manager                  0                   e5226eb941524       kube-controller-manager-addons-608900
	bbb9e427db20a       3861cfcd7c04c                                                                                                                                7 minutes ago        Running             etcd                                     0                   ad5191c8206b9       etcd-addons-608900
	a0166206973c9       1f6d574d502f3                                                                                                                                7 minutes ago        Running             kube-apiserver                           0                   1b1087dd563c7       kube-apiserver-addons-608900
	
	
	==> controller_ingress [626c3d884afa] <==
	W0731 21:36:59.972647       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0731 21:36:59.972985       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0731 21:36:59.980686       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.3" state="clean" commit="6fc0a69044f1ac4c13841ec4391224a2df241460" platform="linux/amd64"
	I0731 21:37:00.110678       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0731 21:37:00.138876       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0731 21:37:00.155855       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0731 21:37:00.177056       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5548b48c-2670-41f4-9afd-87a6cd3a53a0", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0731 21:37:00.181725       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"7cdf26b8-0596-4102-9ab3-410ebb47b117", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0731 21:37:00.182323       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"78d32f7f-dd7b-4a6c-9b1c-bcd57b9d16f7", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0731 21:37:01.358000       7 nginx.go:317] "Starting NGINX process"
	I0731 21:37:01.358292       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0731 21:37:01.359000       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0731 21:37:01.359219       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0731 21:37:01.402972       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0731 21:37:01.403372       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-6d9bd977d4-6ztjb"
	I0731 21:37:01.431710       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-6ztjb" node="addons-608900"
	I0731 21:37:01.483059       7 controller.go:213] "Backend successfully reloaded"
	I0731 21:37:01.483178       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0731 21:37:01.483638       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-6ztjb", UID:"1b0c634b-4443-484c-95c4-528fe0362612", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         7c44f992012555ff7f4e47c08d7c542ca9b4b1f7
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [87504dbe6566] <==
	[INFO] 10.244.0.5:49418 - 47136 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000232601s
	[INFO] 10.244.0.5:48478 - 42272 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149101s
	[INFO] 10.244.0.5:48478 - 65314 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093601s
	[INFO] 10.244.0.5:59927 - 24296 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001257s
	[INFO] 10.244.0.5:59927 - 13546 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000913s
	[INFO] 10.244.0.5:59513 - 14037 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110301s
	[INFO] 10.244.0.5:59513 - 23723 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000454002s
	[INFO] 10.244.0.5:45177 - 37053 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001789s
	[INFO] 10.244.0.5:45177 - 55230 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044701s
	[INFO] 10.244.0.5:47907 - 15605 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000575s
	[INFO] 10.244.0.5:47907 - 18167 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000548s
	[INFO] 10.244.0.5:38488 - 60261 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000199601s
	[INFO] 10.244.0.5:38488 - 46435 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000143801s
	[INFO] 10.244.0.5:53059 - 20122 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001351s
	[INFO] 10.244.0.5:53059 - 13464 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058501s
	[INFO] 10.244.0.26:60531 - 22743 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000349302s
	[INFO] 10.244.0.26:60107 - 5151 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235602s
	[INFO] 10.244.0.26:35340 - 51453 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000907s
	[INFO] 10.244.0.26:53886 - 35872 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143301s
	[INFO] 10.244.0.26:33681 - 50840 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205801s
	[INFO] 10.244.0.26:37719 - 26468 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000413402s
	[INFO] 10.244.0.26:41820 - 62397 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001645809s
	[INFO] 10.244.0.26:35537 - 63684 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002150212s
	[INFO] 10.244.0.29:34184 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000397904s
	[INFO] 10.244.0.29:43888 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097501s
	
	
	==> describe nodes <==
	Name:               addons-608900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-608900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=addons-608900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_33_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-608900
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-608900"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:33:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-608900
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:40:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:40:12 +0000   Wed, 31 Jul 2024 21:33:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:40:12 +0000   Wed, 31 Jul 2024 21:33:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:40:12 +0000   Wed, 31 Jul 2024 21:33:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:40:12 +0000   Wed, 31 Jul 2024 21:33:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.25.32
	  Hostname:    addons-608900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed34fe260d2149ec9d25bdc4eb82462d
	  System UUID:                56c18599-0099-5c48-b0c1-576a8a13ea71
	  Boot ID:                    faee4ddc-8704-48a8-b4ee-bea4eff05568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     cloud-spanner-emulator-5455fb9b69-bk296      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-6ztjb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m6s
	  kube-system                 coredns-7db6d8ff4d-ptk8d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m41s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 csi-hostpathplugin-zr957                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 etcd-addons-608900                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m55s
	  kube-system                 kube-apiserver-addons-608900                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-controller-manager-addons-608900        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-n29jx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-addons-608900                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 nvidia-device-plugin-daemonset-5z9jr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 snapshot-controller-745499f584-drv5b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 snapshot-controller-745499f584-skblq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  local-path-storage          local-path-provisioner-8d985888d-2rvw6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  yakd-dashboard              yakd-dashboard-799879c74f-9wm7v              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m33s                kube-proxy       
	  Normal  Starting                 7m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m3s (x8 over 7m3s)  kubelet          Node addons-608900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m3s (x8 over 7m3s)  kubelet          Node addons-608900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m3s (x7 over 7m3s)  kubelet          Node addons-608900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m55s                kubelet          Node addons-608900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s                kubelet          Node addons-608900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s                kubelet          Node addons-608900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m50s                kubelet          Node addons-608900 status is now: NodeReady
	  Normal  RegisteredNode           6m42s                node-controller  Node addons-608900 event: Registered Node addons-608900 in Controller
	
	
	==> dmesg <==
	[  +5.222348] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.020626] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.134752] kauditd_printk_skb: 72 callbacks suppressed
	[ +12.143668] kauditd_printk_skb: 25 callbacks suppressed
	[ +10.937320] kauditd_printk_skb: 2 callbacks suppressed
	[Jul31 21:35] kauditd_printk_skb: 2 callbacks suppressed
	[Jul31 21:36] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.911754] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.156268] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.257084] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.358233] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.716296] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.331218] kauditd_printk_skb: 39 callbacks suppressed
	[Jul31 21:37] kauditd_printk_skb: 38 callbacks suppressed
	[ +28.076541] kauditd_printk_skb: 73 callbacks suppressed
	[Jul31 21:38] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.403320] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.450229] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.124788] kauditd_printk_skb: 22 callbacks suppressed
	[Jul31 21:39] kauditd_printk_skb: 7 callbacks suppressed
	[ +29.012001] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.678771] kauditd_printk_skb: 66 callbacks suppressed
	[Jul31 21:40] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.659426] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.863108] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [bbb9e427db20] <==
	{"level":"info","ts":"2024-07-31T21:36:49.771437Z","caller":"traceutil/trace.go:171","msg":"trace[1735336145] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1385; }","duration":"247.941144ms","start":"2024-07-31T21:36:49.523486Z","end":"2024-07-31T21:36:49.771427Z","steps":["trace[1735336145] 'agreement among raft nodes before linearized reading'  (duration: 247.766243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:49.994284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.306123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14458"}
	{"level":"info","ts":"2024-07-31T21:36:49.995134Z","caller":"traceutil/trace.go:171","msg":"trace[839702023] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1385; }","duration":"171.175227ms","start":"2024-07-31T21:36:49.823941Z","end":"2024-07-31T21:36:49.995116Z","steps":["trace[839702023] 'range keys from in-memory index tree'  (duration: 170.005021ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:36:57.048433Z","caller":"traceutil/trace.go:171","msg":"trace[1808412132] transaction","detail":"{read_only:false; response_revision:1394; number_of_response:1; }","duration":"211.759145ms","start":"2024-07-31T21:36:56.836656Z","end":"2024-07-31T21:36:57.048415Z","steps":["trace[1808412132] 'process raft request'  (duration: 205.385211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:58.331205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.808148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-07-31T21:36:58.331362Z","caller":"traceutil/trace.go:171","msg":"trace[1573766322] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1395; }","duration":"193.959149ms","start":"2024-07-31T21:36:58.137354Z","end":"2024-07-31T21:36:58.331314Z","steps":["trace[1573766322] 'range keys from in-memory index tree'  (duration: 193.694047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:58.332142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.931378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:36:58.332195Z","caller":"traceutil/trace.go:171","msg":"trace[1512377635] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1395; }","duration":"107.000179ms","start":"2024-07-31T21:36:58.225187Z","end":"2024-07-31T21:36:58.332187Z","steps":["trace[1512377635] 'range keys from in-memory index tree'  (duration: 106.898978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:58.333813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.985675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86340"}
	{"level":"info","ts":"2024-07-31T21:36:58.333966Z","caller":"traceutil/trace.go:171","msg":"trace[602418971] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1395; }","duration":"125.165977ms","start":"2024-07-31T21:36:58.208791Z","end":"2024-07-31T21:36:58.333957Z","steps":["trace[602418971] 'range keys from in-memory index tree'  (duration: 124.680975ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:36:58.485466Z","caller":"traceutil/trace.go:171","msg":"trace[1677996348] transaction","detail":"{read_only:false; response_revision:1396; number_of_response:1; }","duration":"141.222663ms","start":"2024-07-31T21:36:58.344224Z","end":"2024-07-31T21:36:58.485447Z","steps":["trace[1677996348] 'process raft request'  (duration: 140.912561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:59.438405Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.11686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14458"}
	{"level":"info","ts":"2024-07-31T21:36:59.43851Z","caller":"traceutil/trace.go:171","msg":"trace[206953579] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1397; }","duration":"122.238461ms","start":"2024-07-31T21:36:59.316258Z","end":"2024-07-31T21:36:59.438497Z","steps":["trace[206953579] 'range keys from in-memory index tree'  (duration: 122.02506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:59.438815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.399384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:36:59.439613Z","caller":"traceutil/trace.go:171","msg":"trace[363055155] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1397; }","duration":"201.231888ms","start":"2024-07-31T21:36:59.238369Z","end":"2024-07-31T21:36:59.439601Z","steps":["trace[363055155] 'range keys from in-memory index tree'  (duration: 200.216882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:36:59.438434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.736747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86340"}
	{"level":"info","ts":"2024-07-31T21:36:59.442964Z","caller":"traceutil/trace.go:171","msg":"trace[391744304] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1397; }","duration":"235.305272ms","start":"2024-07-31T21:36:59.207649Z","end":"2024-07-31T21:36:59.442954Z","steps":["trace[391744304] 'range keys from in-memory index tree'  (duration: 230.225345ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:36:59.700896Z","caller":"traceutil/trace.go:171","msg":"trace[222463984] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"154.117333ms","start":"2024-07-31T21:36:59.546705Z","end":"2024-07-31T21:36:59.700823Z","steps":["trace[222463984] 'process raft request'  (duration: 152.951526ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:37:51.768779Z","caller":"traceutil/trace.go:171","msg":"trace[1346320054] transaction","detail":"{read_only:false; response_revision:1567; number_of_response:1; }","duration":"167.965832ms","start":"2024-07-31T21:37:51.600795Z","end":"2024-07-31T21:37:51.768761Z","steps":["trace[1346320054] 'process raft request'  (duration: 167.726031ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:38:16.393914Z","caller":"traceutil/trace.go:171","msg":"trace[1593594575] linearizableReadLoop","detail":"{readStateIndex:1738; appliedIndex:1737; }","duration":"357.166897ms","start":"2024-07-31T21:38:16.03673Z","end":"2024-07-31T21:38:16.393897Z","steps":["trace[1593594575] 'read index received'  (duration: 356.968696ms)","trace[1593594575] 'applied index is now lower than readState.Index'  (duration: 197.501µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T21:38:16.394152Z","caller":"traceutil/trace.go:171","msg":"trace[1458513407] transaction","detail":"{read_only:false; response_revision:1656; number_of_response:1; }","duration":"429.066421ms","start":"2024-07-31T21:38:15.965051Z","end":"2024-07-31T21:38:16.394117Z","steps":["trace[1458513407] 'process raft request'  (duration: 428.683518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:38:16.394193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.4882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3731"}
	{"level":"warn","ts":"2024-07-31T21:38:16.394256Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:38:15.964959Z","time spent":"429.226622ms","remote":"127.0.0.1:50926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1654 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-31T21:38:16.394267Z","caller":"traceutil/trace.go:171","msg":"trace[1968156645] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1656; }","duration":"357.598301ms","start":"2024-07-31T21:38:16.036661Z","end":"2024-07-31T21:38:16.394259Z","steps":["trace[1968156645] 'agreement among raft nodes before linearized reading'  (duration: 357.404499ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:38:16.394312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:38:16.036651Z","time spent":"357.653101ms","remote":"127.0.0.1:50948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":3754,"request content":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" "}
	
	
	==> kernel <==
	 21:40:30 up 9 min,  0 users,  load average: 2.04, 2.04, 1.14
	Linux addons-608900 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a0166206973c] <==
	I0731 21:38:08.869238       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0731 21:38:45.715176       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0731 21:38:45.935421       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	E0731 21:38:46.421611       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	I0731 21:38:46.703043       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0731 21:38:46.775674       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0731 21:38:46.847168       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0731 21:38:46.860259       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0731 21:38:47.112138       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	I0731 21:38:47.286214       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0731 21:38:47.343476       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0731 21:38:47.555231       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	I0731 21:38:47.629651       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0731 21:38:47.860604       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0731 21:38:47.925995       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0731 21:38:47.991501       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0731 21:38:48.688404       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0731 21:38:48.784045       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0731 21:39:04.476402       1 conn.go:339] Error on socket receive: read tcp 172.17.25.32:8443->172.17.16.1:53188: use of closed network connection
	E0731 21:39:04.934556       1 conn.go:339] Error on socket receive: read tcp 172.17.25.32:8443->172.17.16.1:53191: use of closed network connection
	E0731 21:39:05.218997       1 conn.go:339] Error on socket receive: read tcp 172.17.25.32:8443->172.17.16.1:53193: use of closed network connection
	I0731 21:39:54.647420       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 21:39:55.020722       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 21:39:56.107510       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 21:40:14.012886       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [67bfe5754edd] <==
	W0731 21:39:59.361782       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:39:59.362042       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 21:40:00.310062       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:00.310136       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 21:40:01.413388       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:01.413498       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 21:40:03.907944       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:03.908086       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 21:40:05.281750       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0731 21:40:06.123967       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:06.124004       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 21:40:07.927939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-698f998955" duration="5.7µs"
	W0731 21:40:11.329180       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:11.329283       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 21:40:13.832193       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:13.832260       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 21:40:13.921224       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:13.921392       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 21:40:19.046238       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0731 21:40:19.046298       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:40:19.217229       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0731 21:40:19.217528       1 shared_informer.go:320] Caches are synced for garbage collector
	W0731 21:40:20.763226       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 21:40:20.763292       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 21:40:26.513315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="73.801µs"
	
	
	==> kube-proxy [6111e4e7d002] <==
	I0731 21:33:56.390343       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:33:56.498136       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.25.32"]
	I0731 21:33:57.290538       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:33:57.291539       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:33:57.291931       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:33:57.351958       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:33:57.352660       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:33:57.352709       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:33:57.357606       1 config.go:192] "Starting service config controller"
	I0731 21:33:57.357713       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:33:57.357760       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:33:57.357770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:33:57.359270       1 config.go:319] "Starting node config controller"
	I0731 21:33:57.363191       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:33:57.465676       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:33:57.466605       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:33:57.468441       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c8ff160b3a16] <==
	W0731 21:33:32.986806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:33:32.986862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 21:33:32.988586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 21:33:32.988630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 21:33:33.009644       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:33.009735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:33.051184       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:33:33.051288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 21:33:33.067441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 21:33:33.067551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 21:33:33.118092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 21:33:33.118240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 21:33:33.131351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:33:33.131669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 21:33:33.290339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:33:33.290542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 21:33:33.321996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:33.322019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:33.327206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:33.327233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:33.329792       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:33:33.329816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:33:33.476729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:33.476866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 21:33:35.443817       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:40:26 addons-608900 kubelet[2278]: E0731 21:40:26.014350    2278 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="439cfb10-7cca-44fa-b6b9-d103658e4fee" containerName="task-pv-container"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.014553    2278 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b09c69-35aa-4889-afb0-29839c6495e5" containerName="helm-test"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.014663    2278 memory_manager.go:354] "RemoveStaleState removing state" podUID="439cfb10-7cca-44fa-b6b9-d103658e4fee" containerName="task-pv-container"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.014764    2278 memory_manager.go:354] "RemoveStaleState removing state" podUID="25154a9d-77a9-4603-836c-0a9878b07a9c" containerName="busybox"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.130080    2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-data\") pod \"helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") " pod="local-path-storage/helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.130301    2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-script\") pod \"helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") " pod="local-path-storage/helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133"
	Jul 31 21:40:26 addons-608900 kubelet[2278]: I0731 21:40:26.130366    2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vtt\" (UniqueName: \"kubernetes.io/projected/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-kube-api-access-r5vtt\") pod \"helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") " pod="local-path-storage/helper-pod-delete-pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133"
	Jul 31 21:40:27 addons-608900 kubelet[2278]: I0731 21:40:27.122232    2278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25154a9d-77a9-4603-836c-0a9878b07a9c" path="/var/lib/kubelet/pods/25154a9d-77a9-4603-836c-0a9878b07a9c/volumes"
	Jul 31 21:40:27 addons-608900 kubelet[2278]: I0731 21:40:27.170923    2278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd"
	Jul 31 21:40:27 addons-608900 kubelet[2278]: I0731 21:40:27.346470    2278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7sq\" (UniqueName: \"kubernetes.io/projected/94c0ae96-47c8-47c6-b205-6927ea9a5d0c-kube-api-access-ht7sq\") pod \"94c0ae96-47c8-47c6-b205-6927ea9a5d0c\" (UID: \"94c0ae96-47c8-47c6-b205-6927ea9a5d0c\") "
	Jul 31 21:40:27 addons-608900 kubelet[2278]: I0731 21:40:27.357638    2278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c0ae96-47c8-47c6-b205-6927ea9a5d0c-kube-api-access-ht7sq" (OuterVolumeSpecName: "kube-api-access-ht7sq") pod "94c0ae96-47c8-47c6-b205-6927ea9a5d0c" (UID: "94c0ae96-47c8-47c6-b205-6927ea9a5d0c"). InnerVolumeSpecName "kube-api-access-ht7sq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 21:40:27 addons-608900 kubelet[2278]: I0731 21:40:27.447476    2278 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ht7sq\" (UniqueName: \"kubernetes.io/projected/94c0ae96-47c8-47c6-b205-6927ea9a5d0c-kube-api-access-ht7sq\") on node \"addons-608900\" DevicePath \"\""
	Jul 31 21:40:28 addons-608900 kubelet[2278]: I0731 21:40:28.214689    2278 scope.go:117] "RemoveContainer" containerID="13eb2f83aefdc775e94c67645618f73858ca1de7faefdf95438eaadd1301ef26"
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.114606    2278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c0ae96-47c8-47c6-b205-6927ea9a5d0c" path="/var/lib/kubelet/pods/94c0ae96-47c8-47c6-b205-6927ea9a5d0c/volumes"
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.673530    2278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-data\") pod \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") "
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.673599    2278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5vtt\" (UniqueName: \"kubernetes.io/projected/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-kube-api-access-r5vtt\") pod \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") "
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.673648    2278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-script\") pod \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\" (UID: \"7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8\") "
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.674000    2278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-data" (OuterVolumeSpecName: "data") pod "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8" (UID: "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.675213    2278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-script" (OuterVolumeSpecName: "script") pod "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8" (UID: "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.684441    2278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-kube-api-access-r5vtt" (OuterVolumeSpecName: "kube-api-access-r5vtt") pod "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8" (UID: "7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8"). InnerVolumeSpecName "kube-api-access-r5vtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.775072    2278 reconciler_common.go:289] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-data\") on node \"addons-608900\" DevicePath \"\""
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.775172    2278 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r5vtt\" (UniqueName: \"kubernetes.io/projected/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-kube-api-access-r5vtt\") on node \"addons-608900\" DevicePath \"\""
	Jul 31 21:40:29 addons-608900 kubelet[2278]: I0731 21:40:29.775190    2278 reconciler_common.go:289] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8-script\") on node \"addons-608900\" DevicePath \"\""
	Jul 31 21:40:30 addons-608900 kubelet[2278]: I0731 21:40:30.353280    2278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0142bfb2f7ce891e50f478a07a6a3e6d9f519df8cb6b257c5157a01e3dbffd"
	Jul 31 21:40:31 addons-608900 kubelet[2278]: I0731 21:40:31.120775    2278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8" path="/var/lib/kubelet/pods/7e6512f0-49d2-4c13-b49c-6b6bb8ce9ad8/volumes"
	
	
	==> storage-provisioner [da71b9e468ab] <==
	I0731 21:34:19.750378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:34:19.858340       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:34:19.858384       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:34:19.906279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:34:19.910152       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef30fc3b-06d4-4d30-9a7f-56885b14def0", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-608900_ce591ac7-bfb3-4dca-bf89-9eab0b37c9ad became leader
	I0731 21:34:19.910373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-608900_ce591ac7-bfb3-4dca-bf89-9eab0b37c9ad!
	I0731 21:34:20.011001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-608900_ce591ac7-bfb3-4dca-bf89-9eab0b37c9ad!
	

-- /stdout --
** stderr ** 
	W0731 21:40:21.463775    4568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-608900 -n addons-608900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-608900 -n addons-608900: (13.4813693s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-608900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-glflk ingress-nginx-admission-patch-njdcl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-608900 describe pod ingress-nginx-admission-create-glflk ingress-nginx-admission-patch-njdcl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-608900 describe pod ingress-nginx-admission-create-glflk ingress-nginx-admission-patch-njdcl: exit status 1 (182.4403ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-glflk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-njdcl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-608900 describe pod ingress-nginx-admission-create-glflk ingress-nginx-admission-patch-njdcl: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.96s)

TestDockerFlags (10800.595s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-196000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestDockerFlags (1m21s)
	TestForceSystemdFlag (7m55s)
	TestNetworkPlugins (8m11s)
	TestPause (13m5s)
	TestPause/serial (13m5s)
	TestPause/serial/SecondStartNoReconfiguration (7m5s)
	TestRunningBinaryUpgrade (13m5s)
	TestStartStop (13m5s)

goroutine 2189 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a009c0, 0xc00122dbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007802a0, {0x4a000e0, 0x2a, 0x2a}, {0x2652fa7?, 0x4880cf?, 0x4a23500?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008f1860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008f1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070400)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 127 [chan receive, 172 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00093f900, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 125
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 139 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00093f850, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20e9f80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001680960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00093f900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004dc000, {0x3633a60, 0xc000af2240}, 0x1, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004dc000, 0x3b9aca00, 0x0, 0x1, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2102 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1f1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1f1e0, 0xc000215cc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 683 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ff823554e10?, {0xc00148ba80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x378, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017331a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0013c7800)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0013c7800)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b1e9c0, 0xc0013c7800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000b1e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc000b1e9c0, 0x30db6d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2120 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a01d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a01d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a01d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a01d40, 0xc001236300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 69 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 42
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 140 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3657660, 0xc00013c000}, 0xc0012c7f50, 0xc0012c7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3657660, 0xc00013c000}, 0x90?, 0xc0012c7f50, 0xc0012c7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3657660?, 0xc00013c000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012c7fd0?, 0x55e4a4?, 0xc0007e4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 126 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001680a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 125
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 681 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1e680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc000b1e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000b1e680, 0x30db688)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 141 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 140
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2122 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f2820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f2820, 0xc001236500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 680 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1e4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000b1e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc000b1e4e0, 0x30db690)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2013 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0004f8680, {0x25f6e54?, 0x43f4ad?}, 0xc00050e0c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0004f8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0004f8680, 0x30db770)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2104 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1f520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1f520, 0xc000215e80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2101 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1f040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1f040, 0xc0002156c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 682 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ff823554e10?, {0xc001225868?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4b0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001732420)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001570000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001570000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b1e820, 0xc001570000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestDockerFlags(0xc000b1e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:51 +0x489
testing.tRunner(0xc000b1e820, 0x30db6a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2099 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000b1e000, 0x30db990)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2061
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2127 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f3380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f3380, 0xc001236980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 941 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 940
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2105 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1f6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1f6c0, 0xc000215f40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2103 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1f380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1f380, 0xc000215e00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2123 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f2b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f2b60, 0xc001236580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 940 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3657660, 0xc00013c000}, 0xc001d73f50, 0xc001d73f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3657660, 0xc00013c000}, 0x53?, 0xc001d73f50, 0xc001d73f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3657660?, 0xc00013c000?}, 0xc0008d4010?, 0xc001d73fd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4aae480?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2184 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc001765b20?, 0x231fc98?, 0xc001765b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3dfdf6?, 0x4ab0960?, 0xc001765bf8?, 0x3d29a5?, 0x0?, 0x0?, 0x15700000000?, 0xc0000a67b0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3b8, {0xc00122820d?, 0x1df3, 0x4841df?}, 0xc001765c48?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001532788?, {0xc00122820d?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001532788, {0xc00122820d, 0x1df3, 0x1df3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067e070, {0xc00122820d?, 0x5?, 0x1e35?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018b01e0, {0x36325e0, 0xc0000a60c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0018b01e0}, {0x36325e0, 0xc0000a60c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0018b01e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0018b01e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0018b01e0}, {0x36326a0, 0xc00067e070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x30db798?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2063
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2183 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc001d71b20?, 0x231fc98?, 0xc001d71b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x49cf920?, 0x4ab0960?, 0xc001d71bf8?, 0x3d283b?, 0x157d8420598?, 0x434341?, 0x3c8ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7b8, {0xc00189326f?, 0x591, 0x4841df?}, 0xc001d71c28?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001532288?, {0xc00189326f?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001532288, {0xc00189326f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067e038, {0xc00189326f?, 0xc001d71d98?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018b01b0, {0x36325e0, 0xc0007e0010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0018b01b0}, {0x36325e0, 0xc0007e0010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0018b01b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0018b01b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0018b01b0}, {0x36326a0, 0xc00067e038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2063
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2121 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1fa00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b1fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b1fa00, 0xc001236480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2100 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b1eea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b1eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b1eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b1eea0, 0xc000215680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1212 [chan send, 135 minutes]:
os/exec.(*Cmd).watchCtx(0xc000003e00, 0xc00013de60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 847
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2118 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a01a00, 0xc00050e0c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2013
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2065 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f9d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0004f9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0004f9d40, 0x30db738)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1036 [chan send, 139 minutes]:
os/exec.(*Cmd).watchCtx(0xc001628300, 0xc001636960)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1035
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 760 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x157fdb11ea8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000580408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00148e7a0, 0xc00130dbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc00148e788, 0x2dc, {0xc00075c5a0?, 0x0?, 0x0?}, 0xc000580008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc00148e788, 0xc00130dd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc00148e788)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0003d6500)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0003d6500)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005060f0, {0x364a720, 0xc0003d6500})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0005060f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000a016c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 757
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 939 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000b19b10, 0x32)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20e9f80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001681a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b19b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004e9960, {0x3633a60, 0xc000520570}, 0x1, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004e9960, 0x3b9aca00, 0x0, 0x1, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2119 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a01ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a01ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a01ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a01ba0, 0xc001236280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2015 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0004f8b60, {0x25f8371?, 0xd18c2e2800?}, 0xc00136a690)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc0004f8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0004f8b60, 0x30db788)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 906 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001681b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 907 [chan receive, 139 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b19b40, 0xc00013c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2185 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00165e000, 0xc001670060)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2063
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2221 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ff823554e10?, {0xc001519a10?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x528, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017b7350)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00165ed80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00165ed80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f8820, 0xc00165ed80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x36574a0, 0xc000788f50}, 0xc0004f8820, {0xc0004ac480?, 0xc0379e7c58?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0004f8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0004f8820, 0xc001634400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2077
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2186 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc00143db20?, 0x235f8f8?, 0xc00143db58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3dfdf6?, 0x4ab0960?, 0xc00143dbf8?, 0x3d283b?, 0x0?, 0x0?, 0x3c8ba6?, 0x4f0047005f004b?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x404, {0xc0018f05fe?, 0x202, 0x4841df?}, 0x6f006400200036?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000404a08?, {0xc0018f05fe?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000404a08, {0xc0018f05fe, 0x202, 0x202})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6728, {0xc0018f05fe?, 0xc00143dd98?, 0x6f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015ea0c0, {0x36325e0, 0xc0007e0188})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0015ea0c0}, {0x36325e0, 0xc0007e0188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0015ea0c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0015ea0c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0015ea0c0}, {0x36326a0, 0xc0000a6728}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x3b006e00690062?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 682
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2124 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f2d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f2d00, 0xc001236680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2188 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001570000, 0xc000054480)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 682
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2125 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f2ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f2ea0, 0xc001236700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2126 [chan receive, 8 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f31e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008f31e0, 0xc001236800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2118
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2205 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013c7800, 0xc0014fdec0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 683
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2203 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc001767b20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0xc001767bf8?, 0x3d283b?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4fc, {0xc001892b8a?, 0x476, 0x4841df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00173f188?, {0xc001892b8a?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00173f188, {0xc001892b8a, 0x476, 0x476})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b03170, {0xc001892b8a?, 0x157fda95c98?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015ea600, {0x36325e0, 0xc00067e3f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0015ea600}, {0x36325e0, 0xc00067e3f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0015ea600})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0015ea600?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0015ea600}, {0x36326a0, 0xc000b03170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 683
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2064 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007b3400)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0004f9ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0004f9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0004f9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0004f9ba0, 0x30db7c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2061 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0004f96c0, {0x25f6e54?, 0x5173d3?}, 0x30db990)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0004f96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0004f96c0, 0x30db7b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2187 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc001333b20?, 0x235f8f8?, 0xc001333b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3dfdf6?, 0x4ab0960?, 0xc001333bf8?, 0x3d29a5?, 0x0?, 0x0?, 0x0?, 0x1?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3d8, {0xc0018f3d85?, 0x27b, 0x4841df?}, 0x36574a0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000404f08?, {0xc0018f3d85?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000404f08, {0xc0018f3d85, 0x27b, 0x27b})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6750, {0xc0018f3d85?, 0x157fd841228?, 0xe7f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015ea0f0, {0x36325e0, 0xc000140278})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0015ea0f0}, {0x36325e0, 0xc000140278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001333e78?, {0x3632720, 0xc0015ea0f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0015ea0f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0015ea0f0}, {0x36326a0, 0xc0000a6750}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016700c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 682
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2077 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0008f24e0, {0x26361cb?, 0x24?}, 0xc001634400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0008f24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0008f24e0, 0xc00136a690)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2015
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2063 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ff823554e10?, {0xc001221960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x724, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017b65d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00165e000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00165e000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004f9a00, 0xc00165e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0004f9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc0004f9a00, 0x30db798)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2204 [syscall, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc00160bb20?, 0x232e028?, 0xc00160bb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x49cf920?, 0x4ab0960?, 0xc00160bbf8?, 0x3d29a5?, 0x0?, 0x20000?, 0x1?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x77c, {0xc0013fafca?, 0x3036, 0x4841df?}, 0xc00160bc04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00173f688?, {0xc0013fafca?, 0x3194?, 0x3194?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00173f688, {0xc0013fafca, 0x3036, 0x3036})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b031a8, {0xc0013fafca?, 0xc00160bd98?, 0x10000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015ea630, {0x36325e0, 0xc000140cd0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0015ea630}, {0x36325e0, 0xc000140cd0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0015ea630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0015ea630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0015ea630}, {0x36326a0, 0xc000b031a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 683
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2222 [syscall, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc001507b20?, 0x232e028?, 0xc001507b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3dfdf6?, 0x4ab0960?, 0xc001507bf8?, 0x3d29a5?, 0x157d8420108?, 0xc0004f884d?, 0x260b139?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x54c, {0xc00189221b?, 0x5e5, 0x0?}, 0x10?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017c1688?, {0xc00189221b?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017c1688, {0xc00189221b, 0x5e5, 0x5e5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067e440, {0xc00189221b?, 0x34e?, 0x21b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018b0150, {0x36325e0, 0xc000b031d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0018b0150}, {0x36325e0, 0xc000b031d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4925a00?, {0x3632720, 0xc0018b0150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0018b0150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0018b0150}, {0x36326a0, 0xc00067e440}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001634400?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2221
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2223 [syscall, locked to thread]:
syscall.SyscallN(0x3e7ec5?, {0xc0018e5b20?, 0x232e028?, 0xc0018e5b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3dfdf6?, 0x4ab0960?, 0xc0018e5bf8?, 0x3d29a5?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7d0, {0xc0012d1ff2?, 0x800e, 0x4841df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017c1b88?, {0xc0012d1ff2?, 0x10000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017c1b88, {0xc0012d1ff2, 0x800e, 0x800e})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00067e458, {0xc0012d1ff2?, 0x0?, 0x7e77?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018b0180, {0x36325e0, 0xc000140d20})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3632720, 0xc0018b0180}, {0x36325e0, 0xc000140d20}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3632720, 0xc0018b0180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x49b3c20?, {0x3632720?, 0xc0018b0180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3632720, 0xc0018b0180}, {0x36326a0, 0xc00067e458}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2221
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2224 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc00165ed80, 0xc001671140)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2221
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                    
x
+
TestErrorSpam/setup (189s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-642600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-642600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 --driver=hyperv: (3m8.9956548s)
error_spam_test.go:96: unexpected stderr: "W0731 21:43:43.369054    5928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-642600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19312
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-642600" primary control-plane node in "nospam-642600" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-642600" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0731 21:43:43.369054    5928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (189.00s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (33.74s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: (11.9718988s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (8.6661747s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                     | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-642600                                            | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| start   | -p functional-457100                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:53 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:53 UTC | 31 Jul 24 21:55 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                 |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache delete                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh sudo                                  | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-457100                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh                                       | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache reload                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh                                       | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-457100 kubectl --                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | --context functional-457100                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:53:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:53:24.110391    9988 out.go:291] Setting OutFile to fd 1004 ...
	I0731 21:53:24.111721    9988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:53:24.111721    9988 out.go:304] Setting ErrFile to fd 944...
	I0731 21:53:24.111721    9988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:53:24.136677    9988 out.go:298] Setting JSON to false
	I0731 21:53:24.139919    9988 start.go:129] hostinfo: {"hostname":"minikube6","uptime":538745,"bootTime":1721924058,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:53:24.140923    9988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:53:24.145795    9988 out.go:177] * [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:53:24.149799    9988 notify.go:220] Checking for updates...
	I0731 21:53:24.149799    9988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:53:24.152763    9988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:53:24.155326    9988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:53:24.157726    9988 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 21:53:24.160711    9988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:53:24.164437    9988 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:53:24.164733    9988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:53:29.366714    9988 out.go:177] * Using the hyperv driver based on existing profile
	I0731 21:53:29.372326    9988 start.go:297] selected driver: hyperv
	I0731 21:53:29.372868    9988 start.go:901] validating driver "hyperv" against &{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:53:29.373240    9988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:53:29.422895    9988 cni.go:84] Creating CNI manager for ""
	I0731 21:53:29.422982    9988 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:53:29.423277    9988 start.go:340] cluster config:
	{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:53:29.423925    9988 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:53:29.428151    9988 out.go:177] * Starting "functional-457100" primary control-plane node in "functional-457100" cluster
	I0731 21:53:29.431058    9988 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:53:29.431086    9988 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:53:29.431086    9988 cache.go:56] Caching tarball of preloaded images
	I0731 21:53:29.431086    9988 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 21:53:29.431703    9988 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 21:53:29.431725    9988 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\config.json ...
	I0731 21:53:29.433647    9988 start.go:360] acquireMachinesLock for functional-457100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:53:29.433647    9988 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-457100"
	I0731 21:53:29.433647    9988 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:53:29.433647    9988 fix.go:54] fixHost starting: 
	I0731 21:53:29.434648    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:32.143282    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:32.143359    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:32.143359    9988 fix.go:112] recreateIfNeeded on functional-457100: state=Running err=<nil>
	W0731 21:53:32.143359    9988 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:53:32.150725    9988 out.go:177] * Updating the running hyperv "functional-457100" VM ...
	I0731 21:53:32.155102    9988 machine.go:94] provisionDockerMachine start ...
	I0731 21:53:32.155102    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:34.312148    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:34.312148    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:34.312262    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:53:36.814418    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:53:36.814515    9988 main.go:141] libmachine: [stderr =====>] : 
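The two PowerShell invocations above are the driver's standard probe: one expression for the VM's power state, one for the first IPv4 address on its first network adapter. A minimal Go sketch of that probe is below, for illustration only (the VM name is taken from this run; minikube performs this through its libmachine hyperv driver, not this helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// queryVM runs one Hyper-V PowerShell expression the same way the log shows:
// powershell.exe -NoProfile -NonInteractive <expression>
func queryVM(expr string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "functional-457100" // VM name taken from this run
	state, _ := queryVM(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
	ip, _ := queryVM(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
	fmt.Println(state, ip) // e.g. "Running 172.17.30.24"
}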
	I0731 21:53:36.820351    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:53:36.821062    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:53:36.821062    9988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:53:36.952017    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:53:36.952135    9988 buildroot.go:166] provisioning hostname "functional-457100"
	I0731 21:53:36.952135    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:39.015925    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:39.015925    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:39.016678    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:53:41.515676    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:53:41.515676    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:41.523936    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:53:41.523936    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:53:41.523936    9988 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-457100 && echo "functional-457100" | sudo tee /etc/hostname
	I0731 21:53:41.689628    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:53:41.689733    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:43.781206    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:43.781790    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:43.781790    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:53:46.204715    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:53:46.204984    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:46.210029    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:53:46.210640    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:53:46.210696    9988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-457100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-457100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-457100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:53:46.348418    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
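The shell snippet just above is idempotent: it only touches /etc/hosts when no line already ends with the new hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending one. A simplified local sketch of the same idea in Go (paths and hostname from this run; the field-based match is an approximation of the grep pattern, not minikube's code):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the one-liner above: skip if the hostname already
// resolves, otherwise rewrite (or append) a 127.0.1.1 entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			return nil // hostname already resolvable
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "functional-457100")
}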
	I0731 21:53:46.348561    9988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 21:53:46.348561    9988 buildroot.go:174] setting up certificates
	I0731 21:53:46.348672    9988 provision.go:84] configureAuth start
	I0731 21:53:46.348672    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:48.444687    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:48.445500    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:48.445500    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:53:50.944941    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:53:50.945074    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:50.945074    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:53.023541    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:53.024056    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:53.024056    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:53:55.461283    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:53:55.462006    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:55.462006    9988 provision.go:143] copyHostCerts
	I0731 21:53:55.462207    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 21:53:55.462561    9988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 21:53:55.462561    9988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 21:53:55.462813    9988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 21:53:55.463723    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 21:53:55.463723    9988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 21:53:55.464308    9988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 21:53:55.464756    9988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 21:53:55.465362    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 21:53:55.466048    9988 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 21:53:55.466048    9988 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 21:53:55.466048    9988 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 21:53:55.467311    9988 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-457100 san=[127.0.0.1 172.17.30.24 functional-457100 localhost minikube]
	I0731 21:53:55.625080    9988 provision.go:177] copyRemoteCerts
	I0731 21:53:55.637490    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:53:55.637490    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:53:57.722130    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:53:57.722963    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:53:57.723037    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:00.175083    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:00.175881    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:00.175938    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:54:00.287263    9988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6497157s)
	I0731 21:54:00.287263    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 21:54:00.287263    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:54:00.333358    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 21:54:00.333815    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:54:00.378233    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 21:54:00.378531    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:54:00.418401    9988 provision.go:87] duration metric: took 14.0694905s to configureAuth
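configureAuth generated a server certificate signed by the minikube CA with the SANs logged above (functional-457100, localhost, minikube, 127.0.0.1, 172.17.30.24) and copied ca.pem, server.pem and server-key.pem into /etc/docker. A short Go sketch for checking what ended up in such a certificate; the local "server.pem" path is an assumption, since on the VM the file lands at /etc/docker/server.pem:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("server.pem") // assumed local copy of the provisioned cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// For this run the expected SANs are the ones logged by configureAuth.
	fmt.Println("DNS SANs:", cert.DNSNames)     // functional-457100 localhost minikube
	fmt.Println("IP SANs: ", cert.IPAddresses)  // 127.0.0.1 172.17.30.24
}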
	I0731 21:54:00.418442    9988 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:54:00.418539    9988 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:54:00.418539    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:02.523818    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:02.524809    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:02.524866    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:04.986327    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:04.987410    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:04.992689    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:54:04.993705    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:54:04.993791    9988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 21:54:05.135964    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 21:54:05.136077    9988 buildroot.go:70] root file system type: tmpfs
	I0731 21:54:05.136218    9988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 21:54:05.136343    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:07.188078    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:07.188078    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:07.188830    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:09.656392    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:09.656392    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:09.662281    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:54:09.662878    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:54:09.662985    9988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 21:54:09.823233    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 21:54:09.823340    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:11.890710    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:11.891524    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:11.891597    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:14.356019    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:14.356840    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:14.362063    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:54:14.362223    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:54:14.362223    9988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 21:54:14.502413    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
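The one-liner above only swaps in docker.service.new and restarts docker when it differs from the installed unit; on an unchanged configuration the diff succeeds and nothing is restarted. A minimal local sketch of that "update only if changed" step (same paths and systemctl sequence as the SSH command; error handling trimmed for brevity):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	oldPath := "/lib/systemd/system/docker.service"
	newPath := "/lib/systemd/system/docker.service.new"
	oldData, _ := os.ReadFile(oldPath) // a missing file simply reads as "different"
	newData, _ := os.ReadFile(newPath)
	if bytes.Equal(oldData, newData) {
		return // unit unchanged; docker keeps running untouched
	}
	// Same sequence the SSH command runs: install the new unit, reload
	// systemd, then enable and restart docker.
	_ = os.Rename(newPath, oldPath)
	_ = exec.Command("systemctl", "daemon-reload").Run()
	_ = exec.Command("systemctl", "-f", "enable", "docker").Run()
	_ = exec.Command("systemctl", "-f", "restart", "docker").Run()
}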
	I0731 21:54:14.502802    9988 machine.go:97] duration metric: took 42.3471728s to provisionDockerMachine
	I0731 21:54:14.502802    9988 start.go:293] postStartSetup for "functional-457100" (driver="hyperv")
	I0731 21:54:14.502802    9988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:54:14.515496    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:54:14.515496    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:16.561444    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:16.561444    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:16.562125    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:19.042155    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:19.042913    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:19.043478    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:54:19.153332    9988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6377781s)
	I0731 21:54:19.167784    9988 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:54:19.175146    9988 command_runner.go:130] > NAME=Buildroot
	I0731 21:54:19.175146    9988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 21:54:19.175146    9988 command_runner.go:130] > ID=buildroot
	I0731 21:54:19.175146    9988 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 21:54:19.175146    9988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 21:54:19.175146    9988 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:54:19.175146    9988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 21:54:19.175146    9988 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 21:54:19.176658    9988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 21:54:19.176658    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 21:54:19.178677    9988 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts -> hosts in /etc/test/nested/copy/12332
	I0731 21:54:19.178935    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts -> /etc/test/nested/copy/12332/hosts
	I0731 21:54:19.201389    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12332
	I0731 21:54:19.219887    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 21:54:19.265950    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts --> /etc/test/nested/copy/12332/hosts (40 bytes)
	I0731 21:54:19.310195    9988 start.go:296] duration metric: took 4.8073335s for postStartSetup
	I0731 21:54:19.310195    9988 fix.go:56] duration metric: took 49.8759268s for fixHost
	I0731 21:54:19.310195    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:21.416830    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:21.416954    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:21.416954    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:23.847068    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:23.848066    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:23.853678    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:54:23.853849    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:54:23.853849    9988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:54:23.992572    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462864.013396367
	
	I0731 21:54:23.992680    9988 fix.go:216] guest clock: 1722462864.013396367
	I0731 21:54:23.992680    9988 fix.go:229] Guest: 2024-07-31 21:54:24.013396367 +0000 UTC Remote: 2024-07-31 21:54:19.3101952 +0000 UTC m=+55.359995201 (delta=4.703201167s)
	I0731 21:54:23.992680    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:26.043247    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:26.043247    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:26.043247    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:28.603250    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:28.603476    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:28.611907    9988 main.go:141] libmachine: Using SSH client type: native
	I0731 21:54:28.611907    9988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:54:28.611907    9988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722462863
	I0731 21:54:28.759343    9988 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 21:54:23 UTC 2024
	
	I0731 21:54:28.759343    9988 fix.go:236] clock set: Wed Jul 31 21:54:23 UTC 2024
	 (err=<nil>)
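The clock check above reads the guest's epoch time over SSH, compares it with the host's wall clock (here a drift of about 4.7s), and resets the guest with "sudo date -s @<epoch>". A rough Go sketch of that comparison; it reads the local clock instead of the guest's, and the 1-second threshold is an assumption rather than minikube's actual value:

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In the real flow "date +%s.%N" runs over SSH on the guest.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guestSec, _ := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	hostSec := float64(time.Now().UnixNano()) / 1e9
	delta := guestSec - hostSec
	fmt.Printf("delta=%.3fs\n", delta)
	if math.Abs(delta) > 1 { // assumed threshold for "worth correcting"
		_ = exec.Command("sudo", "date", "-s", fmt.Sprintf("@%d", int64(hostSec))).Run()
	}
}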
	I0731 21:54:28.759343    9988 start.go:83] releasing machines lock for "functional-457100", held for 59.3249579s
	I0731 21:54:28.759343    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:30.824527    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:30.824527    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:30.825443    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:33.287685    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:33.288984    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:33.292882    9988 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 21:54:33.292986    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:33.303238    9988 ssh_runner.go:195] Run: cat /version.json
	I0731 21:54:33.303238    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:54:35.448470    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:35.448470    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:35.448470    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:35.449184    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:54:35.449220    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:35.449220    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:54:38.100226    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:38.100365    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:38.100365    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:54:38.162898    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:54:38.162898    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:54:38.163340    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:54:38.191425    9988 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 21:54:38.191425    9988 ssh_runner.go:235] Completed: cat /version.json: (4.8881269s)
	I0731 21:54:38.203347    9988 ssh_runner.go:195] Run: systemctl --version
	I0731 21:54:38.256823    9988 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 21:54:38.257006    9988 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9640619s)
	I0731 21:54:38.257006    9988 command_runner.go:130] > systemd 252 (252)
	I0731 21:54:38.257178    9988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	W0731 21:54:38.257006    9988 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 21:54:38.270541    9988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 21:54:38.279149    9988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 21:54:38.280450    9988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:54:38.293699    9988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:54:38.311931    9988 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:54:38.312026    9988 start.go:495] detecting cgroup driver to use...
	I0731 21:54:38.312328    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:54:38.350507    9988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 21:54:38.361603    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 21:54:38.363503    9988 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 21:54:38.363503    9988 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 21:54:38.391495    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 21:54:38.409338    9988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 21:54:38.418216    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 21:54:38.446958    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:54:38.476991    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 21:54:38.507087    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:54:38.537548    9988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:54:38.565769    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 21:54:38.594044    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 21:54:38.627703    9988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 21:54:38.660319    9988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:54:38.677762    9988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 21:54:38.688988    9988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:54:38.721181    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:54:38.984478    9988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 21:54:39.017387    9988 start.go:495] detecting cgroup driver to use...
	I0731 21:54:39.030290    9988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 21:54:39.063810    9988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 21:54:39.063867    9988 command_runner.go:130] > [Unit]
	I0731 21:54:39.063867    9988 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 21:54:39.063867    9988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 21:54:39.063867    9988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 21:54:39.063921    9988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 21:54:39.063921    9988 command_runner.go:130] > StartLimitBurst=3
	I0731 21:54:39.063921    9988 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 21:54:39.063921    9988 command_runner.go:130] > [Service]
	I0731 21:54:39.063921    9988 command_runner.go:130] > Type=notify
	I0731 21:54:39.063921    9988 command_runner.go:130] > Restart=on-failure
	I0731 21:54:39.063921    9988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 21:54:39.063979    9988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 21:54:39.064009    9988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 21:54:39.064009    9988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 21:54:39.064009    9988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 21:54:39.064009    9988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 21:54:39.064009    9988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 21:54:39.064009    9988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 21:54:39.064009    9988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 21:54:39.064009    9988 command_runner.go:130] > ExecStart=
	I0731 21:54:39.064009    9988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 21:54:39.064009    9988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 21:54:39.064009    9988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 21:54:39.064009    9988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 21:54:39.064009    9988 command_runner.go:130] > LimitNOFILE=infinity
	I0731 21:54:39.064009    9988 command_runner.go:130] > LimitNPROC=infinity
	I0731 21:54:39.064009    9988 command_runner.go:130] > LimitCORE=infinity
	I0731 21:54:39.064009    9988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 21:54:39.064009    9988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 21:54:39.064009    9988 command_runner.go:130] > TasksMax=infinity
	I0731 21:54:39.064009    9988 command_runner.go:130] > TimeoutStartSec=0
	I0731 21:54:39.064009    9988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 21:54:39.064009    9988 command_runner.go:130] > Delegate=yes
	I0731 21:54:39.064009    9988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 21:54:39.064009    9988 command_runner.go:130] > KillMode=process
	I0731 21:54:39.064009    9988 command_runner.go:130] > [Install]
	I0731 21:54:39.064009    9988 command_runner.go:130] > WantedBy=multi-user.target
	I0731 21:54:39.077159    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:54:39.123283    9988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:54:39.168110    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:54:39.205840    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 21:54:39.228912    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:54:39.266491    9988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0731 21:54:39.278825    9988 ssh_runner.go:195] Run: which cri-dockerd
	I0731 21:54:39.285317    9988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 21:54:39.297017    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 21:54:39.314910    9988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 21:54:39.358228    9988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 21:54:39.595617    9988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 21:54:39.821143    9988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 21:54:39.821143    9988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
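"Configuring docker to use cgroupfs as cgroup driver" means writing a small /etc/docker/daemon.json and then reloading and restarting docker, which is exactly what the next two commands do. A sketch of that step is below; the exact 130-byte payload is not shown in the log, so the JSON content here is an assumption, not minikube's literal file:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Assumed payload: pin the cgroup driver docker should use.
	daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
	if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0644); err != nil {
		panic(err)
	}
	// Matches the log's follow-up: daemon-reload, then restart docker.
	_ = exec.Command("systemctl", "daemon-reload").Run()
	_ = exec.Command("systemctl", "restart", "docker").Run()
}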
	I0731 21:54:39.866110    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:54:40.106987    9988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 21:54:53.028060    9988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9209128s)
	I0731 21:54:53.040862    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 21:54:53.080419    9988 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 21:54:53.131407    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 21:54:53.166064    9988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 21:54:53.374559    9988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 21:54:53.567442    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:54:53.753379    9988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 21:54:53.794083    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 21:54:53.829530    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:54:54.020575    9988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 21:54:54.146447    9988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 21:54:54.162184    9988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 21:54:54.173641    9988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 21:54:54.173713    9988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 21:54:54.173713    9988 command_runner.go:130] > Device: 0,22	Inode: 1500        Links: 1
	I0731 21:54:54.173713    9988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 21:54:54.173771    9988 command_runner.go:130] > Access: 2024-07-31 21:54:54.076504013 +0000
	I0731 21:54:54.173771    9988 command_runner.go:130] > Modify: 2024-07-31 21:54:54.076504013 +0000
	I0731 21:54:54.173771    9988 command_runner.go:130] > Change: 2024-07-31 21:54:54.079504006 +0000
	I0731 21:54:54.173771    9988 command_runner.go:130] >  Birth: -
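"Will wait 60s for socket path" boils down to polling /var/run/cri-dockerd.sock until it exists as a unix socket or the deadline expires; the single stat above simply succeeded on the first try. A small, self-contained Go sketch of such a wait loop (polling interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists and is a unix socket, or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if info, err := os.Stat(path); err == nil && info.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is ready")
}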
	I0731 21:54:54.173837    9988 start.go:563] Will wait 60s for crictl version
	I0731 21:54:54.186319    9988 ssh_runner.go:195] Run: which crictl
	I0731 21:54:54.191648    9988 command_runner.go:130] > /usr/bin/crictl
	I0731 21:54:54.205161    9988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:54:54.257881    9988 command_runner.go:130] > Version:  0.1.0
	I0731 21:54:54.258794    9988 command_runner.go:130] > RuntimeName:  docker
	I0731 21:54:54.258855    9988 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 21:54:54.258855    9988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 21:54:54.259854    9988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 21:54:54.270343    9988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 21:54:54.301989    9988 command_runner.go:130] > 27.1.1
	I0731 21:54:54.310584    9988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 21:54:54.344971    9988 command_runner.go:130] > 27.1.1
	I0731 21:54:54.350793    9988 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 21:54:54.351341    9988 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 21:54:54.356219    9988 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 21:54:54.356219    9988 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 21:54:54.356219    9988 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 21:54:54.356219    9988 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 21:54:54.358982    9988 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 21:54:54.358982    9988 ip.go:210] interface addr: 172.17.16.1/20
	I0731 21:54:54.370609    9988 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 21:54:54.376287    9988 command_runner.go:130] > 172.17.16.1	host.minikube.internal
	I0731 21:54:54.377302    9988 kubeadm.go:883] updating cluster {Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:54:54.377302    9988 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:54:54.386258    9988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 21:54:54.419382    9988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:54:54.419504    9988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 21:54:54.419504    9988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:54:54.419504    9988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 21:54:54.419504    9988 docker.go:615] Images already preloaded, skipping extraction
	I0731 21:54:54.429243    9988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 21:54:54.455039    9988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 21:54:54.455039    9988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 21:54:54.455039    9988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 21:54:54.455181    9988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 21:54:54.455181    9988 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 21:54:54.455181    9988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:54:54.455181    9988 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 21:54:54.455181    9988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:54:54.455181    9988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 21:54:54.455181    9988 cache_images.go:84] Images are preloaded, skipping loading
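The preload check lists image tags with "docker images --format {{.Repository}}:{{.Tag}}" and skips extracting the preload tarball because every expected v1.30.3 image is already present. A Go sketch of that comparison, with the expected list copied from the output above (illustration only, not minikube's cache_images code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img) // would trigger extraction from the preload tarball
		}
	}
}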
	I0731 21:54:54.455181    9988 kubeadm.go:934] updating node { 172.17.30.24 8441 v1.30.3 docker true true} ...
	I0731 21:54:54.455181    9988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-457100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.30.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:54:54.464840    9988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 21:54:54.537054    9988 command_runner.go:130] > cgroupfs
	I0731 21:54:54.537536    9988 cni.go:84] Creating CNI manager for ""
	I0731 21:54:54.537604    9988 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:54:54.537708    9988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:54:54.537770    9988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.30.24 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-457100 NodeName:functional-457100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.30.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.30.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:54:54.538081    9988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.30.24
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-457100"
	  kubeletExtraArgs:
	    node-ip: 172.17.30.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.30.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:54:54.548425    9988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:54:54.567449    9988 command_runner.go:130] > kubeadm
	I0731 21:54:54.567449    9988 command_runner.go:130] > kubectl
	I0731 21:54:54.567449    9988 command_runner.go:130] > kubelet
	I0731 21:54:54.567449    9988 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:54:54.579432    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:54:54.593981    9988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 21:54:54.624596    9988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:54:54.651345    9988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
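The kubeadm config dumped above is what gets written here as /var/tmp/minikube/kubeadm.yaml.new (2159 bytes): four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later diffed against the existing /var/tmp/minikube/kubeadm.yaml before the copy. A minimal Go sketch that only splits such a file into its documents (the local path is hypothetical; minikube itself templates the file and copies it into the VM over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// splitYAMLDocs splits a multi-document YAML file on its "---" separators.
// The kubeadm.yaml generated above holds four documents: InitConfiguration,
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
func splitYAMLDocs(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var docs []string
	for _, d := range strings.Split(string(data), "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, d)
		}
	}
	return docs, nil
}

func main() {
	// Hypothetical local copy of the file the log writes to the VM.
	docs, err := splitYAMLDocs("kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, d := range docs {
		fmt.Printf("document %d starts with: %s\n", i+1, strings.SplitN(d, "\n", 2)[0])
	}
}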
	I0731 21:54:54.694864    9988 ssh_runner.go:195] Run: grep 172.17.30.24	control-plane.minikube.internal$ /etc/hosts
	I0731 21:54:54.701023    9988 command_runner.go:130] > 172.17.30.24	control-plane.minikube.internal
	I0731 21:54:54.713676    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:54:54.925281    9988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:54:54.950370    9988 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100 for IP: 172.17.30.24
	I0731 21:54:54.950420    9988 certs.go:194] generating shared ca certs ...
	I0731 21:54:54.950420    9988 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:54:54.951446    9988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 21:54:54.951849    9988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 21:54:54.952075    9988 certs.go:256] generating profile certs ...
	I0731 21:54:54.952893    9988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.key
	I0731 21:54:54.953340    9988 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\apiserver.key.e4590547
	I0731 21:54:54.953759    9988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\proxy-client.key
	I0731 21:54:54.953846    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 21:54:54.953990    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 21:54:54.953990    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 21:54:54.953990    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 21:54:54.953990    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 21:54:54.954521    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 21:54:54.954670    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 21:54:54.954809    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 21:54:54.955273    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 21:54:54.955606    9988 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 21:54:54.955606    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 21:54:54.955606    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 21:54:54.956370    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 21:54:54.956535    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 21:54:54.957076    9988 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 21:54:54.957507    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:54:54.957649    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 21:54:54.957649    9988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 21:54:54.958993    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:54:54.999177    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:54:55.052657    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:54:55.096147    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:54:55.151948    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:54:55.232506    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:54:55.300496    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:54:55.360800    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:54:55.412076    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:54:55.465645    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 21:54:55.519092    9988 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 21:54:55.574994    9988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:54:55.630919    9988 ssh_runner.go:195] Run: openssl version
	I0731 21:54:55.646260    9988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 21:54:55.659305    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:54:55.690135    9988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:54:55.697368    9988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:54:55.697368    9988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:54:55.709296    9988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:54:55.717084    9988 command_runner.go:130] > b5213941
	I0731 21:54:55.729392    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:54:55.756921    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 21:54:55.799516    9988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 21:54:55.806602    9988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 21:54:55.806602    9988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 21:54:55.816602    9988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 21:54:55.826874    9988 command_runner.go:130] > 51391683
	I0731 21:54:55.841761    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 21:54:55.886979    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 21:54:55.925339    9988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 21:54:55.937568    9988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 21:54:55.937821    9988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 21:54:55.951764    9988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 21:54:55.965870    9988 command_runner.go:130] > 3ec20f2e
	I0731 21:54:55.979935    9988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
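The openssl x509 -hash -noout runs above print each certificate's subject hash (b5213941, 51391683, 3ec20f2e), and that hash becomes the symlink name under /etc/ssl/certs (b5213941.0 and so on), which is how OpenSSL-style clients find trusted CAs by directory lookup. A small Go sketch of that hash-then-symlink step (the cert path is hypothetical and root is required for /etc/ssl/certs; minikube runs the equivalent shell commands inside the VM over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCertByHash mirrors the two commands seen in the log:
//   openssl x509 -hash -noout -in <cert>
//   ln -fs <cert> /etc/ssl/certs/<hash>.0
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// -f replaces an existing link, -s makes it symbolic, matching the log's `ln -fs`.
	if err := exec.Command("ln", "-fs", certPath, link).Run(); err != nil {
		return fmt.Errorf("linking %s -> %s: %w", link, certPath, err)
	}
	return nil
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}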
	I0731 21:54:56.026633    9988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:54:56.035850    9988 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:54:56.035850    9988 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 21:54:56.035850    9988 command_runner.go:130] > Device: 8,1	Inode: 1055058     Links: 1
	I0731 21:54:56.035941    9988 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 21:54:56.035941    9988 command_runner.go:130] > Access: 2024-07-31 21:52:26.920560753 +0000
	I0731 21:54:56.035941    9988 command_runner.go:130] > Modify: 2024-07-31 21:52:26.920560753 +0000
	I0731 21:54:56.035941    9988 command_runner.go:130] > Change: 2024-07-31 21:52:26.920560753 +0000
	I0731 21:54:56.035941    9988 command_runner.go:130] >  Birth: 2024-07-31 21:52:26.920560753 +0000
	I0731 21:54:56.048237    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:54:56.057231    9988 command_runner.go:130] > Certificate will not expire
	I0731 21:54:56.068985    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:54:56.076957    9988 command_runner.go:130] > Certificate will not expire
	I0731 21:54:56.088606    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:54:56.099865    9988 command_runner.go:130] > Certificate will not expire
	I0731 21:54:56.111893    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:54:56.122318    9988 command_runner.go:130] > Certificate will not expire
	I0731 21:54:56.135513    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:54:56.143031    9988 command_runner.go:130] > Certificate will not expire
	I0731 21:54:56.158799    9988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:54:56.171190    9988 command_runner.go:130] > Certificate will not expire
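Each openssl x509 -noout -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; "Certificate will not expire" is what allows the restart to reuse the existing certs instead of regenerating them. The same check expressed in Go with crypto/x509, against a hypothetical PEM path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the equivalent of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log checks the apiserver, etcd and front-proxy client certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}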
	I0731 21:54:56.171659    9988 kubeadm.go:392] StartCluster: {Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:54:56.181088    9988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 21:54:56.251428    9988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:54:56.304415    9988 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0731 21:54:56.304415    9988 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0731 21:54:56.304415    9988 command_runner.go:130] > /var/lib/minikube/etcd:
	I0731 21:54:56.304415    9988 command_runner.go:130] > member
	I0731 21:54:56.304415    9988 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:54:56.304415    9988 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:54:56.318064    9988 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:54:56.340471    9988 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:54:56.342489    9988 kubeconfig.go:125] found "functional-457100" server: "https://172.17.30.24:8441"
	I0731 21:54:56.344142    9988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:54:56.344745    9988 kapi.go:59] client config for functional-457100: &rest.Config{Host:"https://172.17.30.24:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-457100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-457100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 21:54:56.346387    9988 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 21:54:56.356905    9988 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:54:56.380592    9988 kubeadm.go:630] The running cluster does not require reconfiguration: 172.17.30.24
	I0731 21:54:56.380592    9988 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:54:56.389350    9988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 21:54:56.460148    9988 command_runner.go:130] > 181a7bb8b9a5
	I0731 21:54:56.461001    9988 command_runner.go:130] > 251a8872b9d7
	I0731 21:54:56.461001    9988 command_runner.go:130] > d1049ec04e6b
	I0731 21:54:56.461001    9988 command_runner.go:130] > f6c70a5cd836
	I0731 21:54:56.461001    9988 command_runner.go:130] > 476c48aee807
	I0731 21:54:56.461001    9988 command_runner.go:130] > e895872468f7
	I0731 21:54:56.461001    9988 command_runner.go:130] > 0c983bd8b69f
	I0731 21:54:56.461001    9988 command_runner.go:130] > 9cc28c900527
	I0731 21:54:56.461001    9988 command_runner.go:130] > 88bc9cae6056
	I0731 21:54:56.461112    9988 command_runner.go:130] > 1fc3088316c0
	I0731 21:54:56.461112    9988 command_runner.go:130] > ca2408f54949
	I0731 21:54:56.461112    9988 command_runner.go:130] > 86a41f57da0b
	I0731 21:54:56.461112    9988 command_runner.go:130] > 06c280a26f16
	I0731 21:54:56.461112    9988 command_runner.go:130] > 1c93bad17003
	I0731 21:54:56.461112    9988 command_runner.go:130] > 8f4a11d770e9
	I0731 21:54:56.461112    9988 command_runner.go:130] > b15ead25b6f0
	I0731 21:54:56.461112    9988 command_runner.go:130] > 5138db35a089
	I0731 21:54:56.461201    9988 command_runner.go:130] > 45f62d68ad15
	I0731 21:54:56.461201    9988 command_runner.go:130] > ca7c2a0fa749
	I0731 21:54:56.461201    9988 command_runner.go:130] > ba2dfdeb46e2
	I0731 21:54:56.461201    9988 command_runner.go:130] > 40bb191cca35
	I0731 21:54:56.461262    9988 docker.go:483] Stopping containers: [181a7bb8b9a5 251a8872b9d7 d1049ec04e6b f6c70a5cd836 476c48aee807 e895872468f7 0c983bd8b69f 9cc28c900527 88bc9cae6056 1fc3088316c0 ca2408f54949 86a41f57da0b 06c280a26f16 1c93bad17003 8f4a11d770e9 b15ead25b6f0 5138db35a089 45f62d68ad15 ca7c2a0fa749 ba2dfdeb46e2 40bb191cca35]
	I0731 21:54:56.471043    9988 ssh_runner.go:195] Run: docker stop 181a7bb8b9a5 251a8872b9d7 d1049ec04e6b f6c70a5cd836 476c48aee807 e895872468f7 0c983bd8b69f 9cc28c900527 88bc9cae6056 1fc3088316c0 ca2408f54949 86a41f57da0b 06c280a26f16 1c93bad17003 8f4a11d770e9 b15ead25b6f0 5138db35a089 45f62d68ad15 ca7c2a0fa749 ba2dfdeb46e2 40bb191cca35
	I0731 21:54:57.172604    9988 command_runner.go:130] > 181a7bb8b9a5
	I0731 21:54:57.173621    9988 command_runner.go:130] > 251a8872b9d7
	I0731 21:54:57.173621    9988 command_runner.go:130] > d1049ec04e6b
	I0731 21:54:57.173673    9988 command_runner.go:130] > f6c70a5cd836
	I0731 21:54:57.173673    9988 command_runner.go:130] > 476c48aee807
	I0731 21:54:57.173673    9988 command_runner.go:130] > e895872468f7
	I0731 21:54:57.173673    9988 command_runner.go:130] > 0c983bd8b69f
	I0731 21:54:57.173673    9988 command_runner.go:130] > 9cc28c900527
	I0731 21:54:57.173673    9988 command_runner.go:130] > 88bc9cae6056
	I0731 21:54:57.173673    9988 command_runner.go:130] > 1fc3088316c0
	I0731 21:54:57.173673    9988 command_runner.go:130] > ca2408f54949
	I0731 21:54:57.173673    9988 command_runner.go:130] > 86a41f57da0b
	I0731 21:54:57.173673    9988 command_runner.go:130] > 06c280a26f16
	I0731 21:54:57.173673    9988 command_runner.go:130] > 1c93bad17003
	I0731 21:54:57.173793    9988 command_runner.go:130] > 8f4a11d770e9
	I0731 21:54:57.173793    9988 command_runner.go:130] > b15ead25b6f0
	I0731 21:54:57.173793    9988 command_runner.go:130] > 5138db35a089
	I0731 21:54:57.173793    9988 command_runner.go:130] > 45f62d68ad15
	I0731 21:54:57.173793    9988 command_runner.go:130] > ca7c2a0fa749
	I0731 21:54:57.173793    9988 command_runner.go:130] > ba2dfdeb46e2
	I0731 21:54:57.173793    9988 command_runner.go:130] > 40bb191cca35
	I0731 21:54:57.189397    9988 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:54:57.285209    9988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:54:57.305019    9988 command_runner.go:130] > -rw------- 1 root root 5647 Jul 31 21:52 /etc/kubernetes/admin.conf
	I0731 21:54:57.305019    9988 command_runner.go:130] > -rw------- 1 root root 5656 Jul 31 21:52 /etc/kubernetes/controller-manager.conf
	I0731 21:54:57.305474    9988 command_runner.go:130] > -rw------- 1 root root 2007 Jul 31 21:52 /etc/kubernetes/kubelet.conf
	I0731 21:54:57.305474    9988 command_runner.go:130] > -rw------- 1 root root 5600 Jul 31 21:52 /etc/kubernetes/scheduler.conf
	I0731 21:54:57.305520    9988 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 31 21:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 31 21:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 31 21:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 31 21:52 /etc/kubernetes/scheduler.conf
	
	I0731 21:54:57.317686    9988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0731 21:54:57.335663    9988 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0731 21:54:57.348300    9988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0731 21:54:57.364197    9988 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0731 21:54:57.375627    9988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0731 21:54:57.395565    9988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:54:57.406319    9988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:54:57.435623    9988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0731 21:54:57.450753    9988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:54:57.462295    9988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
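The greps above test whether each kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8441; the two that do not match (controller-manager.conf and scheduler.conf) are deleted so that the kubeadm init phase kubeconfig run below rewrites them. A rough Go sketch of that check-and-remove pass (paths hardcoded for illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any kubeconfig that no longer references the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			fmt.Printf("skipping %s: %v\n", p, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			continue // still points at the right endpoint, keep it
		}
		fmt.Printf("%q not found in %s - removing so kubeadm regenerates it\n", endpoint, p)
		_ = os.Remove(p)
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}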
	I0731 21:54:57.494218    9988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:54:57.511452    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:54:57.589378    9988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:54:57.589441    9988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 21:54:57.589480    9988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 21:54:57.589480    9988 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:54:57.589549    9988 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0731 21:54:57.589601    9988 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:54:57.589601    9988 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0731 21:54:57.589667    9988 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0731 21:54:57.589667    9988 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:54:57.589667    9988 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:54:57.589667    9988 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:54:57.589667    9988 command_runner.go:130] > [certs] Using the existing "sa" key
	I0731 21:54:57.589761    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:54:58.573917    9988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:54:58.573917    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:54:58.855960    9988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:54:58.855960    9988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:54:58.855960    9988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 21:54:58.855960    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:54:58.943385    9988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:54:58.943430    9988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:54:58.943430    9988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:54:58.943430    9988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:54:58.943606    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:54:59.051458    9988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:54:59.051593    9988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:54:59.065231    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:54:59.571535    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:55:00.077533    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:55:00.568125    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:55:01.074087    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:55:01.097088    9988 command_runner.go:130] > 5883
	I0731 21:55:01.097088    9988 api_server.go:72] duration metric: took 2.0454696s to wait for apiserver process to appear ...
	I0731 21:55:01.097088    9988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:55:01.097088    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:03.574980    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:55:03.574980    9988 api_server.go:103] status: https://172.17.30.24:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:55:03.574980    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:03.642392    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:55:03.642392    9988 api_server.go:103] status: https://172.17.30.24:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:55:03.642392    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:03.677237    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:55:03.677237    9988 api_server.go:103] status: https://172.17.30.24:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:55:04.111002    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:04.122048    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:55:04.122095    9988 api_server.go:103] status: https://172.17.30.24:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:55:04.598388    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:04.610968    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:55:04.611037    9988 api_server.go:103] status: https://172.17.30.24:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:55:05.106704    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:05.114686    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 200:
	ok
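The /healthz probe goes through three stages here: 403 while the request is still treated as anonymous, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are pending, and finally 200 with body "ok". A minimal polling loop in Go (the skip-verify transport and interval are placeholders; the real client trusts the minikube CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly what api_server.go is doing in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Placeholder: skip TLS verification for the sketch; the real code
		// verifies against the minikube CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.30.24:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}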
	I0731 21:55:05.114960    9988 round_trippers.go:463] GET https://172.17.30.24:8441/version
	I0731 21:55:05.114960    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.114960    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.114960    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.127711    9988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 21:55:05.127711    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.127711    9988 round_trippers.go:580]     Audit-Id: 030d2c0e-9d0c-4a74-9c81-32b815c2bb1d
	I0731 21:55:05.127711    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.127711    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.127711    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.127711    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.127711    9988 round_trippers.go:580]     Content-Length: 263
	I0731 21:55:05.127711    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.128249    9988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 21:55:05.128345    9988 api_server.go:141] control plane version: v1.30.3
	I0731 21:55:05.128465    9988 api_server.go:131] duration metric: took 4.031327s to wait for apiserver health ...
	I0731 21:55:05.128494    9988 cni.go:84] Creating CNI manager for ""
	I0731 21:55:05.128559    9988 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:55:05.133769    9988 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:55:05.149873    9988 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:55:05.178009    9988 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:55:05.260233    9988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:55:05.260233    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:05.260233    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.260233    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.260233    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.270663    9988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 21:55:05.270663    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.270663    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.270663    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.270663    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.270663    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.270663    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.270663    9988 round_trippers.go:580]     Audit-Id: c2e7c922-b223-4450-ade1-6cd53f04a3ef
	I0731 21:55:05.273670    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"497","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51612 chars]
	I0731 21:55:05.278777    9988 system_pods.go:59] 7 kube-system pods found
	I0731 21:55:05.278894    9988 system_pods.go:61] "coredns-7db6d8ff4d-2mpwg" [ee5651dc-9d65-4da3-82eb-2f60a206d462] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:55:05.278894    9988 system_pods.go:61] "etcd-functional-457100" [85862e21-0968-4d14-82d7-c68dbebdd097] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:55:05.278894    9988 system_pods.go:61] "kube-apiserver-functional-457100" [526f4a67-6723-4cbf-a5c9-54d26df05040] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:55:05.278894    9988 system_pods.go:61] "kube-controller-manager-functional-457100" [5c261c1f-4da2-45d9-b196-8a188fa8d675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:55:05.278894    9988 system_pods.go:61] "kube-proxy-qv82r" [d0bc1e99-23c4-4cba-8243-a17778aa26d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:55:05.278894    9988 system_pods.go:61] "kube-scheduler-functional-457100" [90906408-1ad9-4b63-b67c-aa8e9aeb57f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:55:05.278894    9988 system_pods.go:61] "storage-provisioner" [ea03b13b-e26e-40a4-a87d-83eca7cf8355] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:55:05.278894    9988 system_pods.go:74] duration metric: took 18.6606ms to wait for pod list to return data ...
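Every pod above reports phase Running while its Ready and ContainersReady conditions are still false, because the containers were just stopped and restarted; the pod_ready wait further down polls until the Ready condition flips to True. A small sketch of that condition check using the k8s.io/api types (requires the k8s.io/api module; the pod literal is made up for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's Ready condition is True, which is what
// the pod_ready wait in the log keeps polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Made-up pod: phase Running, but Ready still False, as in the log above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod))
}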
	I0731 21:55:05.278894    9988 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:55:05.278894    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes
	I0731 21:55:05.278894    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.278894    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.278894    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.296413    9988 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0731 21:55:05.296413    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.296413    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.296413    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.296413    9988 round_trippers.go:580]     Audit-Id: 1ffa39cb-1b17-41e1-b65f-3a57ae4008c6
	I0731 21:55:05.296413    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.296413    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.296413    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.297302    9988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4839 chars]
	I0731 21:55:05.298777    9988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:55:05.298951    9988 node_conditions.go:123] node cpu capacity is 2
	I0731 21:55:05.298951    9988 node_conditions.go:105] duration metric: took 20.0576ms to run NodePressure ...
	I0731 21:55:05.298951    9988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:55:05.879493    9988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 21:55:05.879493    9988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 21:55:05.879493    9988 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:55:05.879493    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0731 21:55:05.879493    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.879493    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.879493    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.883939    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:05.883939    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.884899    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.884899    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.884899    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.884899    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.884899    9988 round_trippers.go:580]     Audit-Id: 9521f116-63d3-47ec-87fd-6ceee7d1be2a
	I0731 21:55:05.884899    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.885690    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30920 chars]
	I0731 21:55:05.887294    9988 kubeadm.go:739] kubelet initialised
	I0731 21:55:05.887294    9988 kubeadm.go:740] duration metric: took 7.8011ms waiting for restarted kubelet to initialise ...
	I0731 21:55:05.887377    9988 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:55:05.887435    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:05.887507    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.887541    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.887541    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.891223    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:05.891308    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.891308    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.891308    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.891308    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.891308    9988 round_trippers.go:580]     Audit-Id: 5bbf03dd-6f9a-436f-9586-67e7389b33a7
	I0731 21:55:05.891308    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.891308    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.892395    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"497","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51612 chars]
	I0731 21:55:05.895700    9988 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:05.895797    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:05.895878    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.895878    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.895878    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.897805    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:05.897805    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.897805    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.897805    9988 round_trippers.go:580]     Audit-Id: b0d94f92-34fb-4293-a68e-de95ba631bc1
	I0731 21:55:05.897805    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.897805    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.897805    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.897805    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.898837    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"497","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6637 chars]
	I0731 21:55:05.898837    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:05.898837    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:05.898837    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:05.898837    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:05.901812    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:05.902827    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:05.902827    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:05.902827    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:05.902827    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:05.902827    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:05 GMT
	I0731 21:55:05.902827    9988 round_trippers.go:580]     Audit-Id: 79f68a4e-5bcb-4df2-b738-48565948f3fa
	I0731 21:55:05.902827    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:05.902827    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:06.399720    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:06.399720    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:06.399720    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:06.399720    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:06.402648    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:06.402648    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:06.402648    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:06 GMT
	I0731 21:55:06.402648    9988 round_trippers.go:580]     Audit-Id: e431a3da-7f8d-4da3-a3e0-4522d9ca8d8f
	I0731 21:55:06.402648    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:06.402648    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:06.402648    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:06.402648    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:06.403905    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:06.404577    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:06.404577    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:06.404577    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:06.404577    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:06.408477    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:06.408477    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:06.408549    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:06.408549    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:06.408549    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:06 GMT
	I0731 21:55:06.408583    9988 round_trippers.go:580]     Audit-Id: c76ec7e9-b41b-4620-843c-977d2e53a17e
	I0731 21:55:06.408583    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:06.408583    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:06.408949    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:06.897730    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:06.897891    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:06.897891    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:06.897994    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:06.902452    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:06.902931    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:06.902931    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:06.902931    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:06.902931    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:06.902931    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:06.902931    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:06 GMT
	I0731 21:55:06.902931    9988 round_trippers.go:580]     Audit-Id: 7577d0e8-e3a1-4520-9a6e-bb53e415c740
	I0731 21:55:06.903275    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:06.904312    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:06.904410    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:06.904410    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:06.904410    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:06.908254    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:06.908254    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:06.908254    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:06 GMT
	I0731 21:55:06.908254    9988 round_trippers.go:580]     Audit-Id: eaec47b0-737e-462c-bd20-232675a64a8b
	I0731 21:55:06.908254    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:06.908254    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:06.908254    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:06.908254    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:06.908816    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:07.411113    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:07.411113    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:07.411218    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:07.411218    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:07.415001    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:07.415494    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:07.415494    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:07.415494    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:07.415494    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:07.415494    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:07 GMT
	I0731 21:55:07.415494    9988 round_trippers.go:580]     Audit-Id: ca31e583-4f5c-4c29-b68a-f4a05c544750
	I0731 21:55:07.415494    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:07.415649    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:07.416586    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:07.416656    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:07.416656    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:07.416656    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:07.420085    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:07.420167    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:07.420167    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:07 GMT
	I0731 21:55:07.420167    9988 round_trippers.go:580]     Audit-Id: 19f99629-9fdf-4aca-a462-4785b3968944
	I0731 21:55:07.420167    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:07.420167    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:07.420167    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:07.420167    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:07.420323    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:07.911979    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:07.912146    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:07.912146    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:07.912146    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:07.914469    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:07.914469    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:07.915439    9988 round_trippers.go:580]     Audit-Id: e9150e5a-3560-4008-bca5-3cf07a3bd1fb
	I0731 21:55:07.915439    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:07.915439    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:07.915439    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:07.915439    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:07.915439    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:07 GMT
	I0731 21:55:07.915587    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:07.916479    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:07.916638    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:07.916638    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:07.916638    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:07.919967    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:07.919967    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:07.919967    9988 round_trippers.go:580]     Audit-Id: d22649af-d4fe-40aa-bce1-76c8727a9a7e
	I0731 21:55:07.919967    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:07.919967    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:07.919967    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:07.919967    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:07.919967    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:07 GMT
	I0731 21:55:07.921142    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:07.921784    9988 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace has status "Ready":"False"
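
Each poll above is two plain REST calls: a GET on the pod object and a GET on its node, both sent with an Accept: application/json header (the round_trippers lines record the request headers and the API server's flow-control response headers). A stripped-down version of that request using only the Go standard library might look like the sketch below; the bearer token is a placeholder and TLS verification is skipped only to keep the example short.

// Minimal sketch of the raw request recorded by the round_trippers lines:
// GET the pod as JSON and read its Ready condition. The token value is a
// placeholder, and InsecureSkipVerify is used only for brevity; a real
// client would trust the cluster CA instead.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// Only the fields we need from the Pod object.
type pod struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	url := "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/json, */*")
	req.Header.Set("Authorization", "Bearer <token>") // placeholder credential

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		panic(fmt.Sprintf("unexpected status: %s", resp.Status))
	}

	var p pod
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		panic(err)
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("Ready condition is %q\n", c.Status)
		}
	}
}
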
	I0731 21:55:08.397144    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:08.397144    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:08.397144    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:08.397144    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:08.401983    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:08.401983    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:08.402198    9988 round_trippers.go:580]     Audit-Id: 0941c615-4da2-48e6-9137-ea8135e04bcb
	I0731 21:55:08.402198    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:08.402198    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:08.402198    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:08.402198    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:08.402198    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:08 GMT
	I0731 21:55:08.402867    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:08.403882    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:08.403882    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:08.403949    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:08.403949    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:08.406297    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:08.406297    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:08.406297    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:08.406297    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:08.406297    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:08.407309    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:08.407309    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:08 GMT
	I0731 21:55:08.407309    9988 round_trippers.go:580]     Audit-Id: f5d5218a-fb3d-40b3-9238-29be9df90c4e
	I0731 21:55:08.407523    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:08.910152    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:08.910452    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:08.910452    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:08.910452    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:08.913874    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:08.914891    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:08.914891    9988 round_trippers.go:580]     Audit-Id: 0cbe3483-ee89-4335-84d2-5c34a4a9914b
	I0731 21:55:08.914891    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:08.914891    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:08.914952    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:08.914952    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:08.914952    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:08 GMT
	I0731 21:55:08.915100    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:08.915861    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:08.915929    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:08.915929    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:08.915929    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:08.918153    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:08.918153    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:08.918153    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:08.918153    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:08.918153    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:08 GMT
	I0731 21:55:08.918153    9988 round_trippers.go:580]     Audit-Id: adf13fa8-8bfa-4a96-80b6-75c2fc77a8da
	I0731 21:55:08.918153    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:08.918153    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:08.919274    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:09.397183    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:09.397512    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:09.397512    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:09.397554    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:09.401405    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:09.401405    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:09.401405    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:09 GMT
	I0731 21:55:09.401405    9988 round_trippers.go:580]     Audit-Id: 34e4d36d-ccfc-4af7-9261-f90275982405
	I0731 21:55:09.401405    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:09.401405    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:09.401405    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:09.401405    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:09.401405    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:09.402276    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:09.402346    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:09.402346    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:09.402346    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:09.405457    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:09.405526    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:09.405526    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:09.405526    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:09.405526    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:09.405526    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:09 GMT
	I0731 21:55:09.405526    9988 round_trippers.go:580]     Audit-Id: 87798026-9224-4689-87e7-a9ad316899d3
	I0731 21:55:09.405526    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:09.405926    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:09.897893    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:09.897893    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:09.897972    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:09.897972    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:09.901697    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:09.901697    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:09.901697    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:09.902278    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:09.902278    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:09.902278    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:09.902278    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:09 GMT
	I0731 21:55:09.902278    9988 round_trippers.go:580]     Audit-Id: 997091c8-dec5-4bee-9d1d-43835ff78511
	I0731 21:55:09.902568    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:09.902733    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:09.903321    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:09.903321    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:09.903321    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:09.906687    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:09.906836    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:09.906836    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:09 GMT
	I0731 21:55:09.906836    9988 round_trippers.go:580]     Audit-Id: 540055af-0c15-47fc-b34c-25fc15c537a8
	I0731 21:55:09.906836    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:09.906836    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:09.906836    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:09.906836    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:09.906970    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:10.398505    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:10.398505    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.398763    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.398763    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.401079    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:10.401079    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.401079    9988 round_trippers.go:580]     Audit-Id: 3d6777a6-c563-415c-a0fd-d50b98b411a3
	I0731 21:55:10.401079    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.401079    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.401636    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.401636    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.401636    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.401809    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"502","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6813 chars]
	I0731 21:55:10.402558    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:10.402637    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.402637    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.402637    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.417032    9988 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 21:55:10.417032    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.417032    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.417032    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.417032    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.417032    9988 round_trippers.go:580]     Audit-Id: 8e13df3e-364a-4df6-83e6-b3a056667de8
	I0731 21:55:10.417032    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.417032    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.418832    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:10.419290    9988 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:55:10.904911    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:10.905210    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.905210    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.905210    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.910414    9988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 21:55:10.910414    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.910414    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.910414    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.910414    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.910414    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.910414    9988 round_trippers.go:580]     Audit-Id: f56478dc-0a91-4676-bbc6-25006604750d
	I0731 21:55:10.910414    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.910810    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"505","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6584 chars]
	I0731 21:55:10.911344    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:10.911344    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.911344    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.911344    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.915967    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:10.915967    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.915967    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.915967    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.915967    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.915967    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.915967    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.915967    9988 round_trippers.go:580]     Audit-Id: 16d61da9-e832-4de9-b5f3-044c5eccc756
	I0731 21:55:10.915967    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:10.915967    9988 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:10.915967    9988 pod_ready.go:81] duration metric: took 5.0202042s for pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace to be "Ready" ...
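
With coredns Ready after roughly five seconds, the same per-pod wait is repeated for the remaining control-plane components named at the start of the wait (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). A sketch of gathering those pods by the same label selectors and reporting their Ready state is below; it is illustrative only, and the kubeconfig path is an assumed value.

// Illustrative sketch: list the system-critical pods by the label selectors
// from the log above and report whether each one is Ready.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same label selectors the wait in the log iterates over.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}

	ctx := context.Background()
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s Ready=%v\n", p.Name, isReady(&p))
		}
	}
}
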
	I0731 21:55:10.915967    9988 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:10.917225    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:10.917422    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.917422    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.917422    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.919272    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:10.920342    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.920375    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.920375    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.920375    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.920375    9988 round_trippers.go:580]     Audit-Id: eb9b4d2f-2e1d-49ac-9cfe-e5f67c7c296b
	I0731 21:55:10.920375    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.920375    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.920694    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:10.921260    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:10.921260    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:10.921260    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:10.921260    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:10.923333    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:10.923773    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:10.923773    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:10.923815    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:10 GMT
	I0731 21:55:10.923815    9988 round_trippers.go:580]     Audit-Id: 4ef4f284-56e1-488b-8c3b-38ad33ac677b
	I0731 21:55:10.923815    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:10.923815    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:10.923815    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:10.924193    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:11.420154    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:11.420325    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:11.420325    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:11.420325    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:11.423925    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:11.423925    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:11.424854    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:11.424854    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:11.424854    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:11.424854    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:11.424854    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:11 GMT
	I0731 21:55:11.424854    9988 round_trippers.go:580]     Audit-Id: 7160ca6e-d8fd-4b12-a24c-30c4e6eb7a64
	I0731 21:55:11.425097    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:11.425974    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:11.425974    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:11.426034    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:11.426034    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:11.428306    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:11.428634    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:11.428634    9988 round_trippers.go:580]     Audit-Id: 39e30619-adce-4156-82ab-700d151231fa
	I0731 21:55:11.428634    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:11.428634    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:11.428634    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:11.428634    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:11.428634    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:11 GMT
	I0731 21:55:11.429104    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:11.932847    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:11.932917    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:11.932917    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:11.932917    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:11.936749    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:11.936749    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:11.936749    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:11.936749    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:11.936749    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:11.936749    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:11 GMT
	I0731 21:55:11.937090    9988 round_trippers.go:580]     Audit-Id: 39d809cb-0b55-4e9c-b8a6-6017e08d09ae
	I0731 21:55:11.937090    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:11.937308    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:11.938199    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:11.938199    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:11.938199    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:11.938320    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:11.941600    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:11.941600    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:11.941600    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:11.942071    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:11 GMT
	I0731 21:55:11.942071    9988 round_trippers.go:580]     Audit-Id: 16c29c29-b315-415a-b21a-3eb198a89254
	I0731 21:55:11.942071    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:11.942071    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:11.942071    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:11.942423    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:12.431848    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:12.431848    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:12.431848    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:12.431848    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:12.436047    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:12.436124    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:12.436124    9988 round_trippers.go:580]     Audit-Id: c5396107-6e63-425c-975d-d0b40b7ddca6
	I0731 21:55:12.436124    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:12.436124    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:12.436124    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:12.436124    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:12.436124    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:12 GMT
	I0731 21:55:12.436357    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:12.436940    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:12.436940    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:12.436940    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:12.436940    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:12.439944    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:12.439944    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:12.440389    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:12.440389    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:12.440389    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:12.440389    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:12.440389    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:12 GMT
	I0731 21:55:12.440389    9988 round_trippers.go:580]     Audit-Id: a1c148d8-dcf2-477a-a5a1-1b75b347f974
	I0731 21:55:12.440840    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:12.930090    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:12.930277    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:12.930277    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:12.930277    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:12.933585    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:12.933585    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:12.933585    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:12.933585    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:12.933585    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:12.933585    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:12 GMT
	I0731 21:55:12.933585    9988 round_trippers.go:580]     Audit-Id: 61b630aa-7bcf-4537-bfa6-c8b5d947cf49
	I0731 21:55:12.933585    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:12.934834    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:12.935289    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:12.935289    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:12.935289    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:12.935289    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:12.938491    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:12.938491    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:12.938491    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:12.938491    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:12 GMT
	I0731 21:55:12.938491    9988 round_trippers.go:580]     Audit-Id: 589ab4e3-f7a9-4d1a-bb2e-aa66ffec5231
	I0731 21:55:12.938491    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:12.938491    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:12.938491    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:12.939258    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:12.939667    9988 pod_ready.go:102] pod "etcd-functional-457100" in "kube-system" namespace has status "Ready":"False"
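The "Ready":"False" line above is the per-iteration summary the wait loop prints: on each poll it fetches the etcd static pod, inspects the pod's status conditions, and reports whether the Ready condition has turned True yet. A minimal sketch of that condition check using standard client-go types (illustrative only; the name isPodReady is an assumption, not minikube's pod_ready.go):

	// Illustrative sketch, not minikube's actual code: reading the Ready
	// condition from a Pod's status, which is what each poll above checks.
	package podutil

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady returns true once the Pod reports condition Ready=True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}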
	I0731 21:55:13.429191    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:13.429191    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:13.429191    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:13.429191    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:13.432854    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:13.433001    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:13.433001    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:13.433001    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:13.433001    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:13.433001    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:13 GMT
	I0731 21:55:13.433001    9988 round_trippers.go:580]     Audit-Id: 956669f7-d7e1-46aa-ae84-2415707ddc23
	I0731 21:55:13.433001    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:13.434375    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:13.435203    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:13.435203    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:13.435278    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:13.435278    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:13.437559    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:13.437559    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:13.437559    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:13.437559    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:13.437559    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:13.437559    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:13.437559    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:13 GMT
	I0731 21:55:13.437559    9988 round_trippers.go:580]     Audit-Id: 32823a9a-54bc-4564-8223-aa90d214b5fc
	I0731 21:55:13.438015    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:13.928904    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:13.929450    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:13.929450    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:13.929450    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:13.932903    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:13.933937    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:13.933937    9988 round_trippers.go:580]     Audit-Id: 9a6fc6d1-d537-4284-ab19-6371e204688a
	I0731 21:55:13.933937    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:13.933937    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:13.933937    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:13.933937    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:13.933937    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:13 GMT
	I0731 21:55:13.934039    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:13.934782    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:13.934878    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:13.934878    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:13.934878    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:13.937039    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:13.937039    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:13.937039    9988 round_trippers.go:580]     Audit-Id: 429a8d5a-c734-4bfa-a4f9-dcfbc5869b2a
	I0731 21:55:13.937704    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:13.937704    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:13.937704    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:13.937704    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:13.937704    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:13 GMT
	I0731 21:55:13.938029    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:14.430427    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:14.430521    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:14.430521    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:14.430521    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:14.436508    9988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 21:55:14.436508    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:14.436508    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:14.436508    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:14.436508    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:14.436508    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:14 GMT
	I0731 21:55:14.436508    9988 round_trippers.go:580]     Audit-Id: 0455380b-e51f-44fc-8b65-dc8d0e29cc5a
	I0731 21:55:14.436508    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:14.436508    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:14.437306    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:14.437306    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:14.437306    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:14.437306    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:14.439914    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:14.440494    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:14.440537    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:14.440537    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:14 GMT
	I0731 21:55:14.440537    9988 round_trippers.go:580]     Audit-Id: 6d849019-d1e7-49c3-906c-57e54b314320
	I0731 21:55:14.440537    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:14.440537    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:14.440537    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:14.441194    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:14.929227    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:14.929334    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:14.929334    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:14.929334    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:14.933286    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:14.933286    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:14.933286    9988 round_trippers.go:580]     Audit-Id: 32e2a5f3-de75-47ba-ab40-2bffbee55e47
	I0731 21:55:14.933876    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:14.933876    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:14.933876    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:14.933876    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:14.933876    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:14 GMT
	I0731 21:55:14.934069    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:14.934465    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:14.934465    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:14.934465    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:14.934465    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:14.937141    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:14.937956    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:14.937956    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:14.937956    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:14 GMT
	I0731 21:55:14.937956    9988 round_trippers.go:580]     Audit-Id: 98a9f7cd-f05b-4757-80b5-72be6f22440b
	I0731 21:55:14.937956    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:14.937956    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:14.937956    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:14.938282    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:15.430588    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:15.430588    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:15.430694    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:15.430694    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:15.435007    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:15.435007    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:15.435007    9988 round_trippers.go:580]     Audit-Id: 4cd4c33f-9e99-40b5-bc17-c5daa55f6904
	I0731 21:55:15.435007    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:15.435007    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:15.435007    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:15.435007    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:15.435007    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:15 GMT
	I0731 21:55:15.436134    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:15.436699    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:15.436699    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:15.436699    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:15.436699    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:15.439270    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:15.439766    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:15.439766    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:15.439766    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:15.439844    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:15 GMT
	I0731 21:55:15.439872    9988 round_trippers.go:580]     Audit-Id: fd89323d-7bd7-4b00-b6d2-4524bd6b417c
	I0731 21:55:15.439908    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:15.440102    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:15.440491    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:15.440608    9988 pod_ready.go:102] pod "etcd-functional-457100" in "kube-system" namespace has status "Ready":"False"
	I0731 21:55:15.931136    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:15.931136    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:15.931136    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:15.931136    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:15.934744    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:15.935177    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:15.935177    9988 round_trippers.go:580]     Audit-Id: a180de6c-a10c-4ecd-857f-acc936aaf75f
	I0731 21:55:15.935177    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:15.935177    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:15.935177    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:15.935177    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:15.935177    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:15 GMT
	I0731 21:55:15.935478    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"492","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6582 chars]
	I0731 21:55:15.935878    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:15.935878    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:15.935878    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:15.935878    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:15.938504    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:15.939407    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:15.939407    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:15.939407    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:15.939407    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:15.939407    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:15.939407    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:15 GMT
	I0731 21:55:15.939407    9988 round_trippers.go:580]     Audit-Id: 0351a3c4-c5f7-456c-9560-12edfb30754a
	I0731 21:55:15.939717    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:16.429390    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:16.429390    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.429390    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.429481    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.431768    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:16.432567    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.432567    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.432567    9988 round_trippers.go:580]     Audit-Id: c3d4e4cb-25d2-4ccd-82ed-c8d2a18af987
	I0731 21:55:16.432567    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.432567    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.432659    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.432659    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.433015    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"563","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6358 chars]
	I0731 21:55:16.433753    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:16.433802    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.433802    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.433802    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.436411    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:16.436483    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.436483    9988 round_trippers.go:580]     Audit-Id: 2b2a18c5-d0f4-4ffd-9c75-a6aca7234dca
	I0731 21:55:16.436483    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.436483    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.436483    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.436483    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.436483    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.436788    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:16.436788    9988 pod_ready.go:92] pod "etcd-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:16.436788    9988 pod_ready.go:81] duration metric: took 5.5207524s for pod "etcd-functional-457100" in "kube-system" namespace to be "Ready" ...
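At this point the etcd pod has flipped to Ready and the loop records the elapsed time before moving on to the next control-plane pod. The roughly 500 ms GET cadence visible above is a plain poll-until-ready loop against the API server; a self-contained sketch of such a wait with client-go (assumed names, interval, and error handling, not minikube's actual implementation) could look like:

	// Illustrative sketch only: poll a pod until its Ready condition is True
	// or the timeout expires, roughly matching the ~500 ms cadence and the
	// per-pod 4m0s budget seen in the log above.
	package podutil

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Treat transient API errors as "not ready yet" and keep polling.
					return false, nil
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}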
	I0731 21:55:16.436788    9988 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:16.437358    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:16.437358    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.437358    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.437358    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.440039    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:16.440228    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.440228    9988 round_trippers.go:580]     Audit-Id: bb931b4c-639a-49c0-85e6-fe8821ebb09d
	I0731 21:55:16.440228    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.440228    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.440228    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.440228    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.440228    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.440228    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"495","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8136 chars]
	I0731 21:55:16.441098    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:16.441098    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.441098    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.441098    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.444684    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:16.444684    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.444895    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.444895    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.444895    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.444895    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.444895    9988 round_trippers.go:580]     Audit-Id: bab9f450-e2eb-46d3-8247-ff4fc5457289
	I0731 21:55:16.444895    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.445095    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:16.944462    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:16.944462    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.944462    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.944462    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.948856    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:16.948856    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.948856    9988 round_trippers.go:580]     Audit-Id: d3fe29ed-031e-42f3-8411-ccfb0e242912
	I0731 21:55:16.948856    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.948856    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.948856    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.948856    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.948856    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.949396    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"495","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8136 chars]
	I0731 21:55:16.950193    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:16.950310    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:16.950310    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:16.950310    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:16.954670    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:16.954670    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:16.954670    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:16.954670    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:16 GMT
	I0731 21:55:16.954670    9988 round_trippers.go:580]     Audit-Id: 834cc3a4-a18d-4c3f-8fb4-6311844fb9b4
	I0731 21:55:16.954670    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:16.954670    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:16.954670    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:16.955580    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:17.440889    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:17.440889    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:17.440889    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:17.440992    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:17.444410    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:17.444558    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:17.444558    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:17.444558    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:17.444558    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:17.444640    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:17.444640    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:17 GMT
	I0731 21:55:17.444640    9988 round_trippers.go:580]     Audit-Id: 35a96c07-8a5e-4f63-a8bc-76cea70095a0
	I0731 21:55:17.444915    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"495","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8136 chars]
	I0731 21:55:17.445752    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:17.445752    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:17.445806    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:17.445806    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:17.451108    9988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 21:55:17.451108    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:17.451108    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:17.451108    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:17 GMT
	I0731 21:55:17.451108    9988 round_trippers.go:580]     Audit-Id: f5a0c9a2-a8fa-4127-a2ca-7b0ebaa2aafe
	I0731 21:55:17.451108    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:17.451108    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:17.451525    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:17.451725    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:17.943846    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:17.943846    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:17.943846    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:17.943846    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:17.947655    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:17.947655    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:17.947655    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:17 GMT
	I0731 21:55:17.947721    9988 round_trippers.go:580]     Audit-Id: f88df39f-b70f-49a0-8e98-8f7f48310fe3
	I0731 21:55:17.947721    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:17.947721    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:17.947721    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:17.947721    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:17.948511    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"495","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8136 chars]
	I0731 21:55:17.949390    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:17.949390    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:17.949390    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:17.949390    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:17.953002    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:17.953002    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:17.953002    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:17.953002    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:17.953002    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:17.953188    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:17 GMT
	I0731 21:55:17.953188    9988 round_trippers.go:580]     Audit-Id: 957cc5e2-db59-4b7e-a4c5-97f8c5523590
	I0731 21:55:17.953188    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:17.953242    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.444072    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:18.444149    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.444149    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.444149    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.459634    9988 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 21:55:18.459634    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.460469    9988 round_trippers.go:580]     Audit-Id: 7e531a39-9a2d-4093-996a-bfebb269b049
	I0731 21:55:18.460469    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.460469    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.460556    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.460556    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.460556    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.460939    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"495","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8136 chars]
	I0731 21:55:18.461697    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:18.461774    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.461774    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.461774    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.465213    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:18.465823    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.465823    9988 round_trippers.go:580]     Audit-Id: 67b0a91c-ff0e-4531-a4c9-b40f06cc9f3c
	I0731 21:55:18.465823    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.465823    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.465890    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.465890    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.465890    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.466135    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.466602    9988 pod_ready.go:102] pod "kube-apiserver-functional-457100" in "kube-system" namespace has status "Ready":"False"
	I0731 21:55:18.943289    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:18.943289    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.943371    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.943371    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.945958    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:18.946721    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.946721    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.946721    9988 round_trippers.go:580]     Audit-Id: 4696ac77-6097-428c-8aca-863b1ad1c462
	I0731 21:55:18.946721    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.946721    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.946721    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.946721    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.947010    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"571","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7892 chars]
	I0731 21:55:18.947936    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:18.947936    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.947936    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.947936    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.950713    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:18.951092    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.951092    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.951092    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.951092    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.951092    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.951092    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.951092    9988 round_trippers.go:580]     Audit-Id: 93fbf1e1-cb6c-4e2c-87bc-43099ebaa5b8
	I0731 21:55:18.951383    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.952144    9988 pod_ready.go:92] pod "kube-apiserver-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:18.952144    9988 pod_ready.go:81] duration metric: took 2.5147545s for pod "kube-apiserver-functional-457100" in "kube-system" namespace to be "Ready" ...
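The lines above show minikube's pod_ready helper repeatedly GETting the kube-apiserver pod until its Ready condition reports True. As a rough illustration only (hypothetical package and function names, not minikube's actual pod_ready.go), the same polling loop can be written with client-go along these lines:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod roughly every 500ms (matching the cadence
// visible in the log timestamps) until its Ready condition is True or the
// timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}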
	I0731 21:55:18.952144    9988 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.952144    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-457100
	I0731 21:55:18.952144    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.952144    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.952144    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.955008    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:18.955008    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.955008    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.955008    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.955008    9988 round_trippers.go:580]     Audit-Id: 12f0bf31-2170-4cc3-90c1-ca1ac50df2eb
	I0731 21:55:18.955008    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.955008    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.955008    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.955620    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-457100","namespace":"kube-system","uid":"5c261c1f-4da2-45d9-b196-8a188fa8d675","resourceVersion":"564","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5ced719098a864e0088b1886072b76d2","kubernetes.io/config.mirror":"5ced719098a864e0088b1886072b76d2","kubernetes.io/config.seen":"2024-07-31T21:52:37.406429033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I0731 21:55:18.956268    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:18.956355    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.956355    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.956355    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.957691    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:18.958707    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.958707    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.958707    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.958763    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.958763    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.958763    9988 round_trippers.go:580]     Audit-Id: 115eb48d-cb5c-47fe-8369-a42c01e73746
	I0731 21:55:18.958763    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.958763    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.958763    9988 pod_ready.go:92] pod "kube-controller-manager-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:18.958763    9988 pod_ready.go:81] duration metric: took 6.6186ms for pod "kube-controller-manager-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.958763    9988 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qv82r" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.959470    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-proxy-qv82r
	I0731 21:55:18.959470    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.959470    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.959470    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.964217    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:18.964217    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.964217    9988 round_trippers.go:580]     Audit-Id: af87753d-9f1c-4682-ab99-247af119cd07
	I0731 21:55:18.964217    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.964217    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.964217    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.964217    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.964217    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.964830    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv82r","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0bc1e99-23c4-4cba-8243-a17778aa26d0","resourceVersion":"504","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5c9411b8-c49d-4533-ab20-236608604d78","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c9411b8-c49d-4533-ab20-236608604d78\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0731 21:55:18.965090    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:18.965638    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.965638    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.965638    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.967452    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:18.967452    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.967452    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.967452    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.967452    9988 round_trippers.go:580]     Audit-Id: 80e60a3f-4337-4dcf-bb31-abc84fcd1967
	I0731 21:55:18.967452    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.967452    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.967452    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.967452    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.967452    9988 pod_ready.go:92] pod "kube-proxy-qv82r" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:18.967452    9988 pod_ready.go:81] duration metric: took 8.6887ms for pod "kube-proxy-qv82r" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.967452    9988 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.968910    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-457100
	I0731 21:55:18.968910    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.968910    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.968910    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.970197    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:18.971183    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.971183    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.971183    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.971244    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.971244    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.971244    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.971244    9988 round_trippers.go:580]     Audit-Id: 2eb7636d-0ac0-46c9-806b-7603216d0273
	I0731 21:55:18.971244    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-457100","namespace":"kube-system","uid":"90906408-1ad9-4b63-b67c-aa8e9aeb57f4","resourceVersion":"559","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59d82ef16a559d1bfc9b28786e5577d7","kubernetes.io/config.mirror":"59d82ef16a559d1bfc9b28786e5577d7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406429933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0731 21:55:18.971912    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:18.971945    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:18.971945    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:18.972003    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:18.974185    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:18.974251    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:18.974251    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:18.974251    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:18 GMT
	I0731 21:55:18.974251    9988 round_trippers.go:580]     Audit-Id: a668d594-15ef-4523-a1c2-ba040e73576f
	I0731 21:55:18.974323    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:18.974323    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:18.974323    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:18.975014    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:18.975481    9988 pod_ready.go:92] pod "kube-scheduler-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:18.975540    9988 pod_ready.go:81] duration metric: took 8.0881ms for pod "kube-scheduler-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:18.975540    9988 pod_ready.go:38] duration metric: took 13.0879993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:55:18.975596    9988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:55:18.991749    9988 command_runner.go:130] > -16
	I0731 21:55:18.991749    9988 ops.go:34] apiserver oom_adj: -16
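The oom_adj step above confirms the apiserver process is shielded from the kernel OOM killer (negative values make the kernel less likely to kill it under memory pressure). A minimal sketch of the same kind of check in Go, assuming the pid is already known and using the modern oom_score_adj file rather than the legacy oom_adj read in the log:

package oomcheck

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// oomScoreAdj returns the OOM-kill adjustment of the given pid; the more
// negative the value, the less likely the kernel is to kill the process.
func oomScoreAdj(pid int) (int, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_score_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}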
	I0731 21:55:18.991860    9988 kubeadm.go:597] duration metric: took 22.6871615s to restartPrimaryControlPlane
	I0731 21:55:18.991860    9988 kubeadm.go:394] duration metric: took 22.8200121s to StartCluster
	I0731 21:55:18.991860    9988 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:55:18.991860    9988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:55:18.992961    9988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
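Here minikube takes a file lock and rewrites the user's kubeconfig so kubectl points at the restarted cluster. Ignoring the locking, a bare sketch of that kind of kubeconfig edit with client-go's clientcmd package (hypothetical context name, not minikube's lock-guarded implementation) could look like:

package kubecfg

import (
	"k8s.io/client-go/tools/clientcmd"
)

// setCurrentContext loads a kubeconfig file, switches its current-context,
// and writes the file back in place.
func setCurrentContext(path, contextName string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.CurrentContext = contextName
	return clientcmd.WriteToFile(*cfg, path)
}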
	I0731 21:55:18.994550    9988 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 21:55:18.994550    9988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:55:18.994550    9988 addons.go:69] Setting storage-provisioner=true in profile "functional-457100"
	I0731 21:55:18.995357    9988 addons.go:234] Setting addon storage-provisioner=true in "functional-457100"
	W0731 21:55:18.995357    9988 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:55:18.995628    9988 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:55:18.995628    9988 host.go:66] Checking if "functional-457100" exists ...
	I0731 21:55:18.994550    9988 addons.go:69] Setting default-storageclass=true in profile "functional-457100"
	I0731 21:55:18.996369    9988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-457100"
	I0731 21:55:18.997183    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:55:18.997581    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
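The two libmachine lines above shell out to PowerShell to ask Hyper-V for the VM's power state before the addon checks continue. A stripped-down sketch of that pattern from Go (an illustration only, not libmachine's actual Hyper-V driver code):

package hyperv

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same kind of non-interactive PowerShell query seen in the
// log and returns the VM's state string (for example "Running").
func vmState(name string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}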
	I0731 21:55:19.000904    9988 out.go:177] * Verifying Kubernetes components...
	I0731 21:55:19.019963    9988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:55:19.348000    9988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:55:19.384166    9988 node_ready.go:35] waiting up to 6m0s for node "functional-457100" to be "Ready" ...
	I0731 21:55:19.384166    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:19.384166    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.384166    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.384166    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.387890    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:19.388638    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.388638    9988 round_trippers.go:580]     Audit-Id: ce2a2035-29aa-4803-b67e-005afabd3310
	I0731 21:55:19.388638    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.388638    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.388638    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.388638    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.388638    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.388839    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:19.389377    9988 node_ready.go:49] node "functional-457100" has status "Ready":"True"
	I0731 21:55:19.389377    9988 node_ready.go:38] duration metric: took 5.2108ms for node "functional-457100" to be "Ready" ...
	I0731 21:55:19.389505    9988 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:55:19.389548    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:19.389627    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.389627    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.389627    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.393870    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:19.394001    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.394001    9988 round_trippers.go:580]     Audit-Id: f4cea897-694a-4c63-8e18-6fab5d5d0059
	I0731 21:55:19.394001    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.394001    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.394001    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.394001    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.394001    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.394716    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"505","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50185 chars]
	I0731 21:55:19.397080    9988 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:19.397707    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2mpwg
	I0731 21:55:19.397707    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.397707    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.397707    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.400527    9988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 21:55:19.400612    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.400612    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.400612    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.400696    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.400696    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.400696    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.400696    9988 round_trippers.go:580]     Audit-Id: e8c18e84-f95d-4a65-977f-582f6eb82d05
	I0731 21:55:19.400762    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"505","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6584 chars]
	I0731 21:55:19.401565    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:19.401653    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.401653    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.401717    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.408454    9988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 21:55:19.408454    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.408454    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.408454    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.409006    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.409006    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.409006    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.409006    9988 round_trippers.go:580]     Audit-Id: 5e4fc6ca-46da-4fa4-80ef-fce77e877655
	I0731 21:55:19.409419    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:19.411279    9988 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:19.411279    9988 pod_ready.go:81] duration metric: took 14.1987ms for pod "coredns-7db6d8ff4d-2mpwg" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:19.411279    9988 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:19.557416    9988 request.go:629] Waited for 145.5536ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:19.557648    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/etcd-functional-457100
	I0731 21:55:19.557648    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.557648    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.557648    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.562248    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:19.562248    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.562248    9988 round_trippers.go:580]     Audit-Id: 6f6e48e5-b367-4426-9785-414f0b1dac61
	I0731 21:55:19.562248    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.562248    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.562248    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.562248    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.562248    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.563599    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-457100","namespace":"kube-system","uid":"85862e21-0968-4d14-82d7-c68dbebdd097","resourceVersion":"563","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.30.24:2379","kubernetes.io/config.hash":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.mirror":"38f507eb43fb4e3716aa01cd3d32cec7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406424133Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6358 chars]
	I0731 21:55:19.748581    9988 request.go:629] Waited for 183.9015ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:19.748720    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:19.748720    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.748874    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.748874    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.752273    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:19.752791    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.752791    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.752791    9988 round_trippers.go:580]     Audit-Id: 27759c2a-bf1d-47c9-acae-2ffb7370079f
	I0731 21:55:19.752791    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.752791    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.752791    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.752791    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.753064    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:19.753643    9988 pod_ready.go:92] pod "etcd-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:19.753643    9988 pod_ready.go:81] duration metric: took 342.3597ms for pod "etcd-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:19.753643    9988 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:19.955165    9988 request.go:629] Waited for 201.3695ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:19.955415    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100
	I0731 21:55:19.955564    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:19.955564    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:19.955564    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:19.959660    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:19.959660    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:19.959660    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:19 GMT
	I0731 21:55:19.959660    9988 round_trippers.go:580]     Audit-Id: 3da8a930-e613-4cad-bcd1-2121f1047e04
	I0731 21:55:19.959660    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:19.959660    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:19.959660    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:19.959660    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:19.960309    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-457100","namespace":"kube-system","uid":"526f4a67-6723-4cbf-a5c9-54d26df05040","resourceVersion":"571","creationTimestamp":"2024-07-31T21:52:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.30.24:8441","kubernetes.io/config.hash":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.mirror":"b1f21da6d6d77b6662df523b7b4dbe14","kubernetes.io/config.seen":"2024-07-31T21:52:30.221230253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7892 chars]
	I0731 21:55:20.145978    9988 request.go:629] Waited for 184.7215ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.145978    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.146114    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:20.146150    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:20.146150    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:20.151791    9988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 21:55:20.151791    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:20.151791    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:20.152737    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:20.152737    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:20 GMT
	I0731 21:55:20.152737    9988 round_trippers.go:580]     Audit-Id: d952ac8d-66b2-4b91-93ba-1baf73b342aa
	I0731 21:55:20.152737    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:20.152737    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:20.153100    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:20.153643    9988 pod_ready.go:92] pod "kube-apiserver-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:20.153762    9988 pod_ready.go:81] duration metric: took 400.1134ms for pod "kube-apiserver-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:20.153762    9988 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:20.351967    9988 request.go:629] Waited for 198.109ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-457100
	I0731 21:55:20.351967    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-457100
	I0731 21:55:20.351967    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:20.351967    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:20.351967    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:20.356994    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:20.356994    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:20.356994    9988 round_trippers.go:580]     Audit-Id: 7b5f2b6c-7814-4a34-ab06-8c4126ea8ed0
	I0731 21:55:20.356994    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:20.356994    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:20.356994    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:20.356994    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:20.356994    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:20 GMT
	I0731 21:55:20.357607    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-457100","namespace":"kube-system","uid":"5c261c1f-4da2-45d9-b196-8a188fa8d675","resourceVersion":"564","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5ced719098a864e0088b1886072b76d2","kubernetes.io/config.mirror":"5ced719098a864e0088b1886072b76d2","kubernetes.io/config.seen":"2024-07-31T21:52:37.406429033Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I0731 21:55:20.554979    9988 request.go:629] Waited for 196.7745ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.555468    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.555549    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:20.555549    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:20.555549    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:20.560496    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:20.560496    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:20.560610    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:20.560610    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:20.560610    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:20.560610    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:20.560610    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:20 GMT
	I0731 21:55:20.560793    9988 round_trippers.go:580]     Audit-Id: c9b403f6-9b51-4fe4-959d-c7bd83323eac
	I0731 21:55:20.561313    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:20.562218    9988 pod_ready.go:92] pod "kube-controller-manager-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:20.562218    9988 pod_ready.go:81] duration metric: took 408.4515ms for pod "kube-controller-manager-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:20.562218    9988 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv82r" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:20.746817    9988 request.go:629] Waited for 184.5156ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-proxy-qv82r
	I0731 21:55:20.747140    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-proxy-qv82r
	I0731 21:55:20.747140    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:20.747226    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:20.747226    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:20.750995    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:20.751851    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:20.751851    9988 round_trippers.go:580]     Audit-Id: 8f8bfc20-a8cb-4088-b358-902be1c1ed62
	I0731 21:55:20.752015    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:20.752015    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:20.752015    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:20.752015    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:20.752015    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:20 GMT
	I0731 21:55:20.752852    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv82r","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0bc1e99-23c4-4cba-8243-a17778aa26d0","resourceVersion":"504","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5c9411b8-c49d-4533-ab20-236608604d78","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c9411b8-c49d-4533-ab20-236608604d78\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0731 21:55:20.952697    9988 request.go:629] Waited for 198.3231ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.952795    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:20.952795    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:20.952795    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:20.952849    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:20.956495    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:20.956897    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:20.956897    9988 round_trippers.go:580]     Audit-Id: 446f64d7-990e-40eb-b647-72709b2be1a7
	I0731 21:55:20.956897    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:20.956897    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:20.956897    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:20.956897    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:20.957028    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:20 GMT
	I0731 21:55:20.957468    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:20.958180    9988 pod_ready.go:92] pod "kube-proxy-qv82r" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:20.958180    9988 pod_ready.go:81] duration metric: took 395.9565ms for pod "kube-proxy-qv82r" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:20.958260    9988 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:21.146509    9988 request.go:629] Waited for 188.0029ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-457100
	I0731 21:55:21.146509    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-457100
	I0731 21:55:21.146771    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.146859    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.146890    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.151480    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:21.151480    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.151480    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.151480    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.151647    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.151647    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.151647    9988 round_trippers.go:580]     Audit-Id: 64c796a9-26bc-47c1-bab5-24955db43b03
	I0731 21:55:21.151647    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.151799    9988 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-457100","namespace":"kube-system","uid":"90906408-1ad9-4b63-b67c-aa8e9aeb57f4","resourceVersion":"559","creationTimestamp":"2024-07-31T21:52:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"59d82ef16a559d1bfc9b28786e5577d7","kubernetes.io/config.mirror":"59d82ef16a559d1bfc9b28786e5577d7","kubernetes.io/config.seen":"2024-07-31T21:52:37.406429933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0731 21:55:21.238815    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:55:21.238815    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:21.240172    9988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:55:21.241464    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:55:21.241567    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:21.242027    9988 kapi.go:59] client config for functional-457100: &rest.Config{Host:"https://172.17.30.24:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-457100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-457100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 21:55:21.243772    9988 addons.go:234] Setting addon default-storageclass=true in "functional-457100"
	W0731 21:55:21.243772    9988 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:55:21.243772    9988 host.go:66] Checking if "functional-457100" exists ...
	I0731 21:55:21.245230    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:55:21.247442    9988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:55:21.255840    9988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:55:21.255840    9988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:55:21.256779    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:55:21.351945    9988 request.go:629] Waited for 199.2671ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:21.352388    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes/functional-457100
	I0731 21:55:21.352388    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.352455    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.352455    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.356085    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:21.357072    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.357072    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.357150    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.357195    9988 round_trippers.go:580]     Audit-Id: cceb20e9-c5ee-4ec0-9c96-c743e56eefb1
	I0731 21:55:21.357195    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.357195    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.357195    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.357470    9988 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-31T21:52:34Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0731 21:55:21.358038    9988 pod_ready.go:92] pod "kube-scheduler-functional-457100" in "kube-system" namespace has status "Ready":"True"
	I0731 21:55:21.358038    9988 pod_ready.go:81] duration metric: took 399.7732ms for pod "kube-scheduler-functional-457100" in "kube-system" namespace to be "Ready" ...
	I0731 21:55:21.358038    9988 pod_ready.go:38] duration metric: took 1.9685085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:55:21.358038    9988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:55:21.372975    9988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:55:21.413782    9988 command_runner.go:130] > 5883
	I0731 21:55:21.414427    9988 api_server.go:72] duration metric: took 2.419846s to wait for apiserver process to appear ...
	I0731 21:55:21.414427    9988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:55:21.414427    9988 api_server.go:253] Checking apiserver healthz at https://172.17.30.24:8441/healthz ...
	I0731 21:55:21.422273    9988 api_server.go:279] https://172.17.30.24:8441/healthz returned 200:
	ok
	I0731 21:55:21.422400    9988 round_trippers.go:463] GET https://172.17.30.24:8441/version
	I0731 21:55:21.422400    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.422400    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.422533    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.424087    9988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 21:55:21.424494    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.424494    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.424494    9988 round_trippers.go:580]     Content-Length: 263
	I0731 21:55:21.424586    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.424586    9988 round_trippers.go:580]     Audit-Id: 0511d9d8-171d-443c-93c3-19fe86bfd3d4
	I0731 21:55:21.424586    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.424661    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.424661    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.424661    9988 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 21:55:21.424781    9988 api_server.go:141] control plane version: v1.30.3
	I0731 21:55:21.425056    9988 api_server.go:131] duration metric: took 10.6294ms to wait for apiserver health ...
	I0731 21:55:21.425056    9988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:55:21.556259    9988 request.go:629] Waited for 131.0592ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:21.556481    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:21.556481    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.556481    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.556552    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.563017    9988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 21:55:21.563670    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.563670    9988 round_trippers.go:580]     Audit-Id: ba520e50-bab5-4e9f-b059-4bec513be2f1
	I0731 21:55:21.563670    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.563746    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.563746    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.563797    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.563797    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.565743    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"505","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50185 chars]
	I0731 21:55:21.569904    9988 system_pods.go:59] 7 kube-system pods found
	I0731 21:55:21.569981    9988 system_pods.go:61] "coredns-7db6d8ff4d-2mpwg" [ee5651dc-9d65-4da3-82eb-2f60a206d462] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "etcd-functional-457100" [85862e21-0968-4d14-82d7-c68dbebdd097] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "kube-apiserver-functional-457100" [526f4a67-6723-4cbf-a5c9-54d26df05040] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "kube-controller-manager-functional-457100" [5c261c1f-4da2-45d9-b196-8a188fa8d675] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "kube-proxy-qv82r" [d0bc1e99-23c4-4cba-8243-a17778aa26d0] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "kube-scheduler-functional-457100" [90906408-1ad9-4b63-b67c-aa8e9aeb57f4] Running
	I0731 21:55:21.569981    9988 system_pods.go:61] "storage-provisioner" [ea03b13b-e26e-40a4-a87d-83eca7cf8355] Running
	I0731 21:55:21.569981    9988 system_pods.go:74] duration metric: took 144.9226ms to wait for pod list to return data ...
	I0731 21:55:21.569981    9988 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:55:21.746692    9988 request.go:629] Waited for 176.7093ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/default/serviceaccounts
	I0731 21:55:21.747058    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/default/serviceaccounts
	I0731 21:55:21.747058    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.747138    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.747138    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.750583    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:21.751100    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.751100    9988 round_trippers.go:580]     Audit-Id: be0608b1-f8ed-48d7-9b48-59d3268d1ded
	I0731 21:55:21.751100    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.751100    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.751100    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.751100    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.751100    9988 round_trippers.go:580]     Content-Length: 261
	I0731 21:55:21.751100    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.751100    9988 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f363a40d-f91b-4168-aceb-c65cba1f97ee","resourceVersion":"303","creationTimestamp":"2024-07-31T21:52:51Z"}}]}
	I0731 21:55:21.751488    9988 default_sa.go:45] found service account: "default"
	I0731 21:55:21.751599    9988 default_sa.go:55] duration metric: took 181.6158ms for default service account to be created ...
	I0731 21:55:21.751599    9988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:55:21.952324    9988 request.go:629] Waited for 200.6605ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:21.952543    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods
	I0731 21:55:21.952543    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:21.952605    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:21.952643    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:21.958987    9988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 21:55:21.958987    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:21.958987    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:21 GMT
	I0731 21:55:21.959070    9988 round_trippers.go:580]     Audit-Id: 4a6ba1b2-4da9-4072-9481-1cb25a213d22
	I0731 21:55:21.959070    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:21.959070    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:21.959070    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:21.959070    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:21.960331    9988 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-2mpwg","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ee5651dc-9d65-4da3-82eb-2f60a206d462","resourceVersion":"505","creationTimestamp":"2024-07-31T21:52:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cdad2079-b08a-40a1-93a7-eb32da5acbe1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T21:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdad2079-b08a-40a1-93a7-eb32da5acbe1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50185 chars]
	I0731 21:55:21.962661    9988 system_pods.go:86] 7 kube-system pods found
	I0731 21:55:21.962661    9988 system_pods.go:89] "coredns-7db6d8ff4d-2mpwg" [ee5651dc-9d65-4da3-82eb-2f60a206d462] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "etcd-functional-457100" [85862e21-0968-4d14-82d7-c68dbebdd097] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "kube-apiserver-functional-457100" [526f4a67-6723-4cbf-a5c9-54d26df05040] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "kube-controller-manager-functional-457100" [5c261c1f-4da2-45d9-b196-8a188fa8d675] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "kube-proxy-qv82r" [d0bc1e99-23c4-4cba-8243-a17778aa26d0] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "kube-scheduler-functional-457100" [90906408-1ad9-4b63-b67c-aa8e9aeb57f4] Running
	I0731 21:55:21.962661    9988 system_pods.go:89] "storage-provisioner" [ea03b13b-e26e-40a4-a87d-83eca7cf8355] Running
	I0731 21:55:21.962661    9988 system_pods.go:126] duration metric: took 211.0595ms to wait for k8s-apps to be running ...
	I0731 21:55:21.962661    9988 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:55:21.975819    9988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:55:22.001363    9988 system_svc.go:56] duration metric: took 38.7022ms WaitForService to wait for kubelet
	I0731 21:55:22.001363    9988 kubeadm.go:582] duration metric: took 3.0067755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:55:22.001363    9988 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:55:22.160479    9988 request.go:629] Waited for 158.9466ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.30.24:8441/api/v1/nodes
	I0731 21:55:22.160700    9988 round_trippers.go:463] GET https://172.17.30.24:8441/api/v1/nodes
	I0731 21:55:22.160700    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:22.160700    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:22.160700    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:22.165299    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:22.165372    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:22.165372    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:22.165372    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:22 GMT
	I0731 21:55:22.165372    9988 round_trippers.go:580]     Audit-Id: c18dd474-95b6-4644-9183-4513c9a6170e
	I0731 21:55:22.165476    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:22.165476    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:22.165517    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:22.165757    9988 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"functional-457100","uid":"0e07a34f-bcfd-4e42-8463-4819cc198739","resourceVersion":"490","creationTimestamp":"2024-07-31T21:52:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-457100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"functional-457100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T21_52_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4839 chars]
	I0731 21:55:22.166296    9988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:55:22.166396    9988 node_conditions.go:123] node cpu capacity is 2
	I0731 21:55:22.166396    9988 node_conditions.go:105] duration metric: took 165.0303ms to run NodePressure ...
	I0731 21:55:22.166396    9988 start.go:241] waiting for startup goroutines ...
	I0731 21:55:23.467708    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:55:23.467708    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:23.467823    9988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:55:23.467823    9988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:55:23.467823    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:55:23.478864    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:55:23.478864    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:23.478864    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:55:25.699083    9988 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:55:25.699083    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:25.699175    9988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:55:26.086591    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:55:26.086591    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:26.087358    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:55:26.228428    9988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:55:27.018273    9988 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0731 21:55:27.019072    9988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0731 21:55:27.019151    9988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0731 21:55:27.019151    9988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0731 21:55:27.019151    9988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0731 21:55:27.019151    9988 command_runner.go:130] > pod/storage-provisioner configured
	I0731 21:55:28.253598    9988 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:55:28.253598    9988 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:55:28.254178    9988 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:55:28.390182    9988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:55:28.553803    9988 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0731 21:55:28.554659    9988 round_trippers.go:463] GET https://172.17.30.24:8441/apis/storage.k8s.io/v1/storageclasses
	I0731 21:55:28.554659    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:28.554659    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:28.554659    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:28.558810    9988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 21:55:28.558810    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:28.558810    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:28.558810    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:28.558810    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:28.558810    9988 round_trippers.go:580]     Content-Length: 1273
	I0731 21:55:28.558810    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:28 GMT
	I0731 21:55:28.558810    9988 round_trippers.go:580]     Audit-Id: de555ed9-b59d-47bb-ae24-2aa5dc83f469
	I0731 21:55:28.558810    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:28.558810    9988 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"578"},"items":[{"metadata":{"name":"standard","uid":"94d0b16f-560f-4a27-b35e-99d58bb21451","resourceVersion":"400","creationTimestamp":"2024-07-31T21:53:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T21:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0731 21:55:28.559676    9988 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"94d0b16f-560f-4a27-b35e-99d58bb21451","resourceVersion":"400","creationTimestamp":"2024-07-31T21:53:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T21:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0731 21:55:28.559676    9988 round_trippers.go:463] PUT https://172.17.30.24:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 21:55:28.559676    9988 round_trippers.go:469] Request Headers:
	I0731 21:55:28.559676    9988 round_trippers.go:473]     Content-Type: application/json
	I0731 21:55:28.559676    9988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 21:55:28.559676    9988 round_trippers.go:473]     Accept: application/json, */*
	I0731 21:55:28.563518    9988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 21:55:28.563518    9988 round_trippers.go:577] Response Headers:
	I0731 21:55:28.563518    9988 round_trippers.go:580]     Content-Length: 1220
	I0731 21:55:28.563518    9988 round_trippers.go:580]     Date: Wed, 31 Jul 2024 21:55:28 GMT
	I0731 21:55:28.563518    9988 round_trippers.go:580]     Audit-Id: 13ce5efb-ee2c-4ab6-89d5-cf4f6b5316bd
	I0731 21:55:28.563518    9988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 21:55:28.563518    9988 round_trippers.go:580]     Content-Type: application/json
	I0731 21:55:28.563953    9988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e44a6c97-ec60-48de-aeae-0c0bd379622d
	I0731 21:55:28.563953    9988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 793a4262-419d-4605-adab-8a4b63117b52
	I0731 21:55:28.564023    9988 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"94d0b16f-560f-4a27-b35e-99d58bb21451","resourceVersion":"400","creationTimestamp":"2024-07-31T21:53:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T21:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0731 21:55:28.573002    9988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 21:55:28.576025    9988 addons.go:510] duration metric: took 9.5813549s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 21:55:28.577228    9988 start.go:246] waiting for cluster config update ...
	I0731 21:55:28.577228    9988 start.go:255] writing updated cluster config ...
	I0731 21:55:28.589309    9988 ssh_runner.go:195] Run: rm -f paused
	I0731 21:55:28.722712    9988 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:55:28.731985    9988 out.go:177] * Done! kubectl is now configured to use "functional-457100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741050603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741447605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790403637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790938240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.791136941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.793454752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799421380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799486680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799499780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799611481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 cri-dockerd[4630]: time="2024-07-31T21:55:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 21:55:05 functional-457100 cri-dockerd[4630]: time="2024-07-31T21:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 21:55:05 functional-457100 cri-dockerd[4630]: time="2024-07-31T21:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325064572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325272173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325432774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325661875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.403697645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404083647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404418748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.405357753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675272733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675642334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675943436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.676359838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0903f5535e8c2       cbb01a7bd410d       2 minutes ago       Running             coredns                   2                   d8821f7263d96       coredns-7db6d8ff4d-2mpwg
	177da3b0c28ed       55bb025d2cfa5       2 minutes ago       Running             kube-proxy                1                   bedf1cdfe8b44       kube-proxy-qv82r
	df40e581f804b       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   35db2de887330       storage-provisioner
	d876a547bfa7a       76932a3b37d7e       2 minutes ago       Running             kube-controller-manager   2                   0be1c9fcc08f2       kube-controller-manager-functional-457100
	bf84eae8f955a       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   2e01279176ae8       etcd-functional-457100
	483090e067cd7       1f6d574d502f3       2 minutes ago       Running             kube-apiserver            2                   1a92960f0ddb1       kube-apiserver-functional-457100
	4516e9ce4adce       3edc18e7b7672       2 minutes ago       Running             kube-scheduler            1                   11cf2aabc43fb       kube-scheduler-functional-457100
	9fd1c3e9cc892       cbb01a7bd410d       2 minutes ago       Created             coredns                   1                   f6c70a5cd8361       coredns-7db6d8ff4d-2mpwg
	181a7bb8b9a5c       1f6d574d502f3       2 minutes ago       Exited              kube-apiserver            1                   476c48aee8076       kube-apiserver-functional-457100
	251a8872b9d7c       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   e895872468f71       etcd-functional-457100
	d1049ec04e6b0       76932a3b37d7e       2 minutes ago       Exited              kube-controller-manager   1                   0c983bd8b69f0       kube-controller-manager-functional-457100
	9cc28c900527e       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   88bc9cae60560       storage-provisioner
	ca2408f549496       55bb025d2cfa5       4 minutes ago       Exited              kube-proxy                0                   06c280a26f162       kube-proxy-qv82r
	5138db35a0893       3edc18e7b7672       4 minutes ago       Exited              kube-scheduler            0                   ba2dfdeb46e2a       kube-scheduler-functional-457100
	
	
	==> coredns [0903f5535e8c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 01aaa6358818cd69629b54977c0657f45152893fa78c9d48c6346ee2574ccc481acb0dcbfbc0e50b53b225d48ae4f1bf11918b0a55e435e4bcc22cf9a5b1dfb7
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42060 - 61234 "HINFO IN 6184570427527991269.8098925889994911891. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031122049s
	
	
	==> coredns [9fd1c3e9cc89] <==
	
	
	==> describe nodes <==
	Name:               functional-457100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-457100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=functional-457100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_52_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:52:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-457100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:57:06 +0000   Wed, 31 Jul 2024 21:52:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:57:06 +0000   Wed, 31 Jul 2024 21:52:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:57:06 +0000   Wed, 31 Jul 2024 21:52:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:57:06 +0000   Wed, 31 Jul 2024 21:52:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.30.24
	  Hostname:    functional-457100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 b15f0c15ffc44725bae8f40c3c512bc7
	  System UUID:                bac5ef1b-06ad-bb45-81a6-c2e3375fd311
	  Boot ID:                    e54b3175-3dcc-48f3-8965-55fadd4c9cd5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2mpwg                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-functional-457100                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-functional-457100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-functional-457100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-qv82r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-functional-457100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node functional-457100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node functional-457100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node functional-457100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m33s                  kubelet          Node functional-457100 status is now: NodeReady
	  Normal  RegisteredNode           4m23s                  node-controller  Node functional-457100 event: Registered Node functional-457100 in Controller
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node functional-457100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node functional-457100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node functional-457100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                   node-controller  Node functional-457100 event: Registered Node functional-457100 in Controller
	
	
	==> dmesg <==
	[  +5.426675] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.747492] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +5.215916] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.095511] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.527104] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.109828] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.376143] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.211673] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.417454] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	
	
	==> etcd [251a8872b9d7] <==
	{"level":"warn","ts":"2024-07-31T21:54:56.925994Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-31T21:54:56.926079Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.30.24:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.30.24:2380","--initial-cluster=functional-457100=https://172.17.30.24:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.30.24:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.30.24:2380","--name=functional-457100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-31T21:54:56.926153Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-31T21:54:56.926177Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-31T21:54:56.926187Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.30.24:2380"]}
	{"level":"info","ts":"2024-07-31T21:54:56.926237Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:54:56.927366Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.30.24:2379"]}
	{"level":"info","ts":"2024-07-31T21:54:56.927563Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-457100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.30.24:2380"],"listen-peer-urls":["https://172.17.30.24:2380"],"advertise-client-urls":["https://172.17.30.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.30.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clust
er-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-31T21:54:56.937394Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.557884ms"}
	{"level":"info","ts":"2024-07-31T21:54:56.944098Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T21:54:56.950497Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"dee157b8623c8251","local-member-id":"4ea70d6a1db91416","commit-index":521}
	{"level":"info","ts":"2024-07-31T21:54:56.950655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T21:54:56.95085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 became follower at term 2"}
	{"level":"info","ts":"2024-07-31T21:54:56.950877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4ea70d6a1db91416 [peers: [], term: 2, commit: 521, applied: 0, lastindex: 521, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T21:54:56.962596Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [bf84eae8f955] <==
	{"level":"info","ts":"2024-07-31T21:55:01.010854Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:55:01.011027Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:55:01.011527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 switched to configuration voters=(5667513405485421590)"}
	{"level":"info","ts":"2024-07-31T21:55:01.011752Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dee157b8623c8251","local-member-id":"4ea70d6a1db91416","added-peer-id":"4ea70d6a1db91416","added-peer-peer-urls":["https://172.17.30.24:2380"]}
	{"level":"info","ts":"2024-07-31T21:55:01.021754Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:55:01.022447Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4ea70d6a1db91416","initial-advertise-peer-urls":["https://172.17.30.24:2380"],"listen-peer-urls":["https://172.17.30.24:2380"],"advertise-client-urls":["https://172.17.30.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.30.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:55:01.022655Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:55:01.023231Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.30.24:2380"}
	{"level":"info","ts":"2024-07-31T21:55:01.023405Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.30.24:2380"}
	{"level":"info","ts":"2024-07-31T21:55:01.024379Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dee157b8623c8251","local-member-id":"4ea70d6a1db91416","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:55:01.024457Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:55:02.129514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:55:02.129617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:55:02.129651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 received MsgPreVoteResp from 4ea70d6a1db91416 at term 2"}
	{"level":"info","ts":"2024-07-31T21:55:02.129666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:55:02.129842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 received MsgVoteResp from 4ea70d6a1db91416 at term 3"}
	{"level":"info","ts":"2024-07-31T21:55:02.129924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4ea70d6a1db91416 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:55:02.129992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4ea70d6a1db91416 elected leader 4ea70d6a1db91416 at term 3"}
	{"level":"info","ts":"2024-07-31T21:55:02.147297Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4ea70d6a1db91416","local-member-attributes":"{Name:functional-457100 ClientURLs:[https://172.17.30.24:2379]}","request-path":"/0/members/4ea70d6a1db91416/attributes","cluster-id":"dee157b8623c8251","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:55:02.147674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:55:02.148049Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:55:02.148724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:55:02.148175Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:55:02.150638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:55:02.155556Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.30.24:2379"}
	
	
	==> kernel <==
	 21:57:13 up 6 min,  0 users,  load average: 0.78, 0.50, 0.21
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [181a7bb8b9a5] <==
	
	
	==> kube-apiserver [483090e067cd] <==
	I0731 21:55:03.756853       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:55:03.759460       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:55:03.759639       1 policy_source.go:224] refreshing policies
	I0731 21:55:03.771994       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:55:03.774137       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 21:55:03.774165       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 21:55:03.779278       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:55:03.779634       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:55:03.779728       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:55:03.788009       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:55:03.788333       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 21:55:03.788592       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:55:03.788770       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:55:03.788996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:55:03.789269       1 cache.go:39] Caches are synced for autoregister controller
	E0731 21:55:03.790952       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 21:55:03.805605       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:55:04.578069       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 21:55:05.615703       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:55:05.657201       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:55:05.769456       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:55:05.852173       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:55:05.872707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 21:55:17.073472       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:55:17.175449       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1049ec04e6b] <==
	
	
	==> kube-controller-manager [d876a547bfa7] <==
	I0731 21:55:16.882033       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 21:55:16.968229       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 21:55:17.000397       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:55:17.010987       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 21:55:17.026365       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-457100\" does not exist"
	I0731 21:55:17.060047       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 21:55:17.063056       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 21:55:17.069092       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 21:55:17.069668       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 21:55:17.078075       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:55:17.078166       1 shared_informer.go:320] Caches are synced for node
	I0731 21:55:17.078381       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0731 21:55:17.078638       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 21:55:17.079401       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 21:55:17.079564       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 21:55:17.086037       1 shared_informer.go:320] Caches are synced for taint
	I0731 21:55:17.086583       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 21:55:17.088257       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-457100"
	I0731 21:55:17.088582       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 21:55:17.105868       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 21:55:17.109176       1 shared_informer.go:320] Caches are synced for TTL
	I0731 21:55:17.114962       1 shared_informer.go:320] Caches are synced for GC
	I0731 21:55:17.492383       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:55:17.492504       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 21:55:17.509675       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [177da3b0c28e] <==
	I0731 21:55:05.711242       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:55:05.735064       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.30.24"]
	I0731 21:55:05.813201       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:55:05.813260       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:55:05.813297       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:55:05.819139       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:55:05.819467       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:55:05.819502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:55:05.821282       1 config.go:192] "Starting service config controller"
	I0731 21:55:05.821324       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:55:05.821351       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:55:05.821373       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:55:05.822147       1 config.go:319] "Starting node config controller"
	I0731 21:55:05.822181       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:55:05.922550       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:55:05.922692       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:55:05.922847       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ca2408f54949] <==
	I0731 21:52:53.171815       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:52:53.207390       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.30.24"]
	I0731 21:52:53.291500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:52:53.291571       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:52:53.291716       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:52:53.313686       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:52:53.314426       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:52:53.314461       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:52:53.316090       1 config.go:192] "Starting service config controller"
	I0731 21:52:53.316128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:52:53.316161       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:52:53.316170       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:52:53.319818       1 config.go:319] "Starting node config controller"
	I0731 21:52:53.319854       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:52:53.417213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:52:53.417236       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:52:53.420009       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4516e9ce4adc] <==
	I0731 21:55:01.568633       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:55:03.677759       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:55:03.678105       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:55:03.678280       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:55:03.678428       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:55:03.720145       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:55:03.720451       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:55:03.723595       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:55:03.723878       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:55:03.724040       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:55:03.723934       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:55:03.825566       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5138db35a089] <==
	E0731 21:52:35.406667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 21:52:35.524646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 21:52:35.525214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 21:52:35.575496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 21:52:35.575612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 21:52:35.577863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:52:35.577950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 21:52:35.587866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:52:35.588102       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 21:52:35.734264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 21:52:35.734407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 21:52:35.777547       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:52:35.777645       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:52:35.782595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:52:35.782814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 21:52:35.907888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:52:35.908457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:52:35.995994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:52:35.996508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 21:52:36.022565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:52:36.022693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 21:52:36.075133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 21:52:36.075248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0731 21:52:38.072093       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 21:54:40.250761       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 21:55:03 functional-457100 kubelet[5478]: E0731 21:55:03.923513    5478 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-457100\" already exists" pod="kube-system/kube-apiserver-functional-457100"
	Jul 31 21:55:03 functional-457100 kubelet[5478]: E0731 21:55:03.924177    5478 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-functional-457100\" already exists" pod="kube-system/kube-scheduler-functional-457100"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.020863    5478 apiserver.go:52] "Watching apiserver"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.024331    5478 topology_manager.go:215] "Topology Admit Handler" podUID="d0bc1e99-23c4-4cba-8243-a17778aa26d0" podNamespace="kube-system" podName="kube-proxy-qv82r"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.024453    5478 topology_manager.go:215] "Topology Admit Handler" podUID="ee5651dc-9d65-4da3-82eb-2f60a206d462" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2mpwg"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.024554    5478 topology_manager.go:215] "Topology Admit Handler" podUID="ea03b13b-e26e-40a4-a87d-83eca7cf8355" podNamespace="kube-system" podName="storage-provisioner"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.040393    5478 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.089475    5478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea03b13b-e26e-40a4-a87d-83eca7cf8355-tmp\") pod \"storage-provisioner\" (UID: \"ea03b13b-e26e-40a4-a87d-83eca7cf8355\") " pod="kube-system/storage-provisioner"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.089600    5478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0bc1e99-23c4-4cba-8243-a17778aa26d0-xtables-lock\") pod \"kube-proxy-qv82r\" (UID: \"d0bc1e99-23c4-4cba-8243-a17778aa26d0\") " pod="kube-system/kube-proxy-qv82r"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.089666    5478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0bc1e99-23c4-4cba-8243-a17778aa26d0-lib-modules\") pod \"kube-proxy-qv82r\" (UID: \"d0bc1e99-23c4-4cba-8243-a17778aa26d0\") " pod="kube-system/kube-proxy-qv82r"
	Jul 31 21:55:04 functional-457100 kubelet[5478]: I0731 21:55:04.993031    5478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2"
	Jul 31 21:55:05 functional-457100 kubelet[5478]: I0731 21:55:05.005608    5478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a"
	Jul 31 21:55:05 functional-457100 kubelet[5478]: I0731 21:55:05.180273    5478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b"
	Jul 31 21:55:07 functional-457100 kubelet[5478]: I0731 21:55:07.268920    5478 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 21:55:10 functional-457100 kubelet[5478]: I0731 21:55:10.411125    5478 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 21:55:59 functional-457100 kubelet[5478]: E0731 21:55:59.144350    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:55:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:55:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:55:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:55:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:56:59 functional-457100 kubelet[5478]: E0731 21:56:59.142959    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:56:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:56:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:56:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:56:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9cc28c900527] <==
	I0731 21:53:00.055839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:53:00.068902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:53:00.069256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:53:00.086891       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:53:00.086922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"936c4cf9-fe06-4fe1-abf4-c315c41141af", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-457100_ce079c38-7195-425d-8bd4-f961fb5bd53b became leader
	I0731 21:53:00.087601       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-457100_ce079c38-7195-425d-8bd4-f961fb5bd53b!
	I0731 21:53:00.188840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-457100_ce079c38-7195-425d-8bd4-f961fb5bd53b!
	
	
	==> storage-provisioner [df40e581f804] <==
	I0731 21:55:05.486305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:55:05.526132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:55:05.526224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:55:22.957994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:55:22.959398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"936c4cf9-fe06-4fe1-abf4-c315c41141af", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-457100_b583abfc-383f-48a6-9eea-924091dca997 became leader
	I0731 21:55:22.959729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-457100_b583abfc-383f-48a6-9eea-924091dca997!
	I0731 21:55:23.060894       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-457100_b583abfc-383f-48a6-9eea-924091dca997!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:57:05.088760    6964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: (11.8235021s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-457100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (33.74s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (280.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-457100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 21:57:53.109171   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
functional_test.go:757: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-457100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m27.8052178s)

                                                
                                                
-- stdout --
	* [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-457100" primary control-plane node in "functional-457100" cluster
	* Updating the running hyperv "functional-457100" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:57:26.853572    5528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 31 21:51:36 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.140187046Z" level=info msg="Starting up"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.141624258Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.142486225Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.175887423Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201156889Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201203093Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201257297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201271098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201335003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201528018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201710232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201799139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201818641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201834442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201919648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.202207371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204772070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204859777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204976886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205099996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205220905Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205420521Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230343360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230488171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230708688Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230875701Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230898403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231018312Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231299534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231597557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231689564Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231708666Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231722667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231735268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231746669Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231758870Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231780371Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231795473Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231809774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231821175Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231839476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231852477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231863578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231879679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231891680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231904081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231914882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231926183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231938184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231956985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231968086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231979187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231990188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232004489Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232025391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232044792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232081095Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232130099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232149400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232159901Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232172502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232181303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232192304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232201404Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232644939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232767448Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232843954Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.233061571Z" level=info msg="containerd successfully booted in 0.058039s"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.211045636Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.240939316Z" level=info msg="Loading containers: start."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.390678137Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.602007177Z" level=info msg="Loading containers: done."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619716153Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619930570Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732100286Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732180592Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:51:37 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.968558132Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.969564137Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:06 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970079639Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970169640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970185540Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:07 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:07 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:07 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.024577372Z" level=info msg="Starting up"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.025709377Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.027071484Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1090
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.053618610Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081480643Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081627644Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081822145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081930346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081963946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081980246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082218147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082315647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082338247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082374848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082418548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082600949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086192266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086318566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086490267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086584668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086622868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086647468Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086969670Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087041670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087143370Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087203171Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087348671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087451972Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087822574Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087985974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088025075Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088045875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088061175Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088177275Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088213275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088245476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088261476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088275676Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088304476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088326276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088362676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088395276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088411476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088424677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088438677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088475277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088624577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088644378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088659178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088675578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088689378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088702478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088716678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088733478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088799578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088845079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088902079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088962979Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088979779Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088990479Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089002279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089030379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089079380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089167480Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089644482Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089910684Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090114085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090226085Z" level=info msg="containerd successfully booted in 0.037255s"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.066981347Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.089158153Z" level=info msg="Loading containers: start."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.219246473Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.331552709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.422823545Z" level=info msg="Loading containers: done."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450086675Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450273176Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490051966Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:09 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490184766Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.175969519Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.177623827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:18 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.178621432Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179203935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179380036Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:19 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:19 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:19 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.234105169Z" level=info msg="Starting up"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.235095574Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.236131879Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1443
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.275426167Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301524391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301558991Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301597291Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301611992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301678492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301714392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301870893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301964493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301985293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301997093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302021994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302143994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305397510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305512610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305660011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305818612Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305847912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305866312Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306440615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306551715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306575315Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306594115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306608815Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306709616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306971317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307133118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307263919Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307284819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307297919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307316719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307328519Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307341119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307364419Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307380319Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307392319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307403019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307421519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307435219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307447219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307464420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307481220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307493520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307510520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307523620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307535920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307549620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307560220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307570820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307582420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307596920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307626520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307719221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307737821Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307996322Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308088322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308105623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308117723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308127123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308144223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308253523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308551125Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308688625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308818426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308877926Z" level=info msg="containerd successfully booted in 0.034832s"
	Jul 31 21:52:20 functional-457100 dockerd[1437]: time="2024-07-31T21:52:20.279586959Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.532073082Z" level=info msg="Loading containers: start."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.650230345Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.757021555Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.843605068Z" level=info msg="Loading containers: done."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868852689Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868975989Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.914700908Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:23 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.915624912Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.065885876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066057267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066078166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066169561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.108787445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111084026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111372011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111930982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231160081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231440467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.232339820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.235328564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.255725404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256029588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256067686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256307773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412421155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412556048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412576147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.413056722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518274050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518464340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518494438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518687628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688251810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688351305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688364504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688605392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742252102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742314999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742335498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742637882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.415081422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417072002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417090602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417535197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672128885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672286584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672303583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672947477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.726901345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730139613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730261011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730471309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818312143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818457541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818474141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818831037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453137809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453247308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453369207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453956802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505067581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505293379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505473978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505961274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.243892673Z" level=info msg="ignoring event" container=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244152976Z" level=info msg="shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244277078Z" level=warning msg="cleaning up after shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244289778Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.430677178Z" level=info msg="ignoring event" container=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.431622388Z" level=info msg="shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432575999Z" level=warning msg="cleaning up after shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432599699Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701141282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701647788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701982891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.702422796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972699298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972883800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972898900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.973235604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:40 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.158522051Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366096193Z" level=info msg="shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366171993Z" level=warning msg="cleaning up after shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366186393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.366917397Z" level=info msg="ignoring event" container=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370438217Z" level=info msg="shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370526217Z" level=warning msg="cleaning up after shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370560317Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397022963Z" level=info msg="ignoring event" container=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397098863Z" level=info msg="ignoring event" container=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397129763Z" level=info msg="ignoring event" container=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398294270Z" level=info msg="shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400206080Z" level=info msg="ignoring event" container=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400235180Z" level=info msg="ignoring event" container=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400282481Z" level=info msg="ignoring event" container=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400299081Z" level=info msg="ignoring event" container=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400322781Z" level=info msg="ignoring event" container=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400336981Z" level=info msg="ignoring event" container=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400357081Z" level=info msg="ignoring event" container=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405352709Z" level=warning msg="cleaning up after shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405527410Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405518110Z" level=info msg="shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406977618Z" level=warning msg="cleaning up after shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406992918Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398719672Z" level=info msg="shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409870833Z" level=warning msg="cleaning up after shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409920234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398696172Z" level=info msg="shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410300636Z" level=warning msg="cleaning up after shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410314936Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398775172Z" level=info msg="shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412827850Z" level=warning msg="cleaning up after shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412918650Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400683383Z" level=info msg="shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416098568Z" level=warning msg="cleaning up after shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416148268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.432809660Z" level=info msg="ignoring event" container=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400657183Z" level=info msg="shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.432721259Z" level=info msg="shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437529786Z" level=warning msg="cleaning up after shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437674586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.402022290Z" level=info msg="shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444567624Z" level=warning msg="cleaning up after shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444581724Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400770683Z" level=info msg="shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450439057Z" level=warning msg="cleaning up after shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450487457Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.457761397Z" level=warning msg="cleaning up after shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.461575018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.528698587Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.553332023Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.240545103Z" level=info msg="shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1437]: time="2024-07-31T21:54:45.241462908Z" level=info msg="ignoring event" container=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.241761310Z" level=warning msg="cleaning up after shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.242634215Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.266303218Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.326406525Z" level=info msg="ignoring event" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327561619Z" level=info msg="shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327676319Z" level=warning msg="cleaning up after shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327690819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408104227Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408389825Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408538125Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408581925Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:54:51 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Consumed 5.211s CPU time.
	Jul 31 21:54:51 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.462273945Z" level=info msg="Starting up"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.463306441Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.464273336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4365
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.491677521Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514730923Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514766823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514851123Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514876123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514922823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514936023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515076022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515166022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515202521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515213321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515234621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515340521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518312108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518442208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518585707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518673307Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518751006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518824006Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519144305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519198005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519230304Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519245104Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519257204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519301004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519527603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519649003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519752202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519770102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519820602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519844802Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519857702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519873802Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519887502Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519898002Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519908002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519917502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519934301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519946101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519956401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519972901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519987901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519999101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520010001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520020401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520031001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520042701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520052401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520064501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520074901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520087901Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520104701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520115201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520132601Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520243100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520279600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520291800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520301900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520310100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520321600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520330300Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520565299Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520697898Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520826798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520865998Z" level=info msg="containerd successfully booted in 0.030507s"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.509854424Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.544442499Z" level=info msg="Loading containers: start."
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.766225297Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.880382984Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.975441441Z" level=info msg="Loading containers: done."
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002640143Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002734843Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.049225501Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:54:53 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.050327898Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596755794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596967394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.600546486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.601065685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680141326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680651525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680770625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.681471624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.787834610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788128709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788238909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788745008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.801826082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803337879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803354379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.807862569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.156662041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157102141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157156641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157568240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455023173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455506272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455757172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.456589870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545423521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545907920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545933220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.546077820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.657368733Z" level=info msg="ignoring event" container=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.672943306Z" level=info msg="shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673012506Z" level=warning msg="cleaning up after shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673023106Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.720414627Z" level=info msg="ignoring event" container=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.720390327Z" level=info msg="shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721135325Z" level=warning msg="cleaning up after shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721237625Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.734252003Z" level=info msg="ignoring event" container=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735582901Z" level=info msg="shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735748001Z" level=warning msg="cleaning up after shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735761901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.770324643Z" level=info msg="shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.770521742Z" level=info msg="ignoring event" container=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771116141Z" level=warning msg="cleaning up after shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771238441Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825454850Z" level=info msg="shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825512350Z" level=warning msg="cleaning up after shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825523050Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.825710850Z" level=info msg="ignoring event" container=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.957732228Z" level=info msg="ignoring event" container=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958342127Z" level=info msg="shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958423726Z" level=warning msg="cleaning up after shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958446226Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4359]: time="2024-07-31T21:54:57.010679446Z" level=info msg="ignoring event" container=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011517745Z" level=info msg="shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011573245Z" level=warning msg="cleaning up after shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011583945Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.915260843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916553549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916865250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.917443053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.977907139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978170841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978187341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978621843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.019033834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020195140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020326840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020713542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084397044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084724345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084747945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.085373848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.317035345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318630753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318648353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318868154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519366003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519674205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519857506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.520226207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.532460165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533365469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533483570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533656571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618768874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618990475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619552178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619656978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740274100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740778002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741050603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741447605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790403637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790938240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.791136941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.793454752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799421380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799486680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799499780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799611481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325064572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325272173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325432774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325661875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.403697645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404083647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404418748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.405357753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675272733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675642334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675943436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.676359838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.106181204Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:58:43 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.297975932Z" level=info msg="ignoring event" container=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.298919937Z" level=info msg="ignoring event" container=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304623068Z" level=info msg="shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304753768Z" level=warning msg="cleaning up after shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304771769Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.305922375Z" level=info msg="shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312090308Z" level=warning msg="cleaning up after shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312184808Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.335289432Z" level=info msg="ignoring event" container=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335598134Z" level=info msg="shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335668334Z" level=warning msg="cleaning up after shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335680634Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.351413219Z" level=info msg="ignoring event" container=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352173623Z" level=info msg="shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352345424Z" level=warning msg="cleaning up after shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352953227Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.397610666Z" level=info msg="ignoring event" container=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398677672Z" level=info msg="shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398782972Z" level=warning msg="cleaning up after shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.399526776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.402449292Z" level=info msg="ignoring event" container=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.402467392Z" level=info msg="shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403097396Z" level=warning msg="cleaning up after shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403243196Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.413937254Z" level=info msg="ignoring event" container=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414504557Z" level=info msg="shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414900059Z" level=warning msg="cleaning up after shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.415026660Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429858739Z" level=info msg="ignoring event" container=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429971840Z" level=info msg="ignoring event" container=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.430056440Z" level=info msg="ignoring event" container=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430200841Z" level=info msg="shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430412942Z" level=warning msg="cleaning up after shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430689343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.442089105Z" level=info msg="shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.442229705Z" level=info msg="ignoring event" container=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447102231Z" level=warning msg="cleaning up after shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447257932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447025231Z" level=info msg="shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459716799Z" level=warning msg="cleaning up after shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459728099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447081731Z" level=info msg="shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462555614Z" level=warning msg="cleaning up after shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462567814Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.476556089Z" level=info msg="ignoring event" container=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.476821491Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479748706Z" level=info msg="shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479845307Z" level=warning msg="cleaning up after shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479857707Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.574091512Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4359]: time="2024-07-31T21:58:48.222487529Z" level=info msg="ignoring event" container=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222702130Z" level=info msg="shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222762830Z" level=warning msg="cleaning up after shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222834130Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.253468471Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.289266541Z" level=info msg="ignoring event" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291442308Z" level=info msg="shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291783303Z" level=warning msg="cleaning up after shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.292023000Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360036592Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360455786Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360615283Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360656983Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:58:54 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Consumed 9.040s CPU time.
	Jul 31 21:58:54 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:58:54 functional-457100 dockerd[8563]: time="2024-07-31T21:58:54.414155271Z" level=info msg="Starting up"
	Jul 31 21:59:54 functional-457100 dockerd[8563]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 21:59:54 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-457100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:761: restart took 2m28.0105708s for "functional-457100" cluster.
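The stderr block above shows where the restart actually died: after minikube rewrote /lib/systemd/system/docker.service and restarted the unit, dockerd[8563] could not dial /run/containerd/containerd.sock within its 60-second deadline, so systemd marked docker.service as failed and the start command exited with status 90. A minimal troubleshooting sketch for this symptom (illustrative only; it assumes shell access to the functional-457100 guest, e.g. via `out/minikube-windows-amd64.exe -p functional-457100 ssh`, and that the guest runs docker and containerd as the systemd units seen in the journal above):

	# check whether containerd is up and actually owns its socket
	sudo systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	# look at containerd's own journal for a crash or slow start in the same window
	sudo journalctl -u containerd -n 50 --no-pager
	# restart containerd first, then docker, so dockerd re-dials a fresh socket
	sudo systemctl restart containerd docker
	sudo systemctl is-active containerd docker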
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.5836227s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0731 21:59:54.885205    6328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (1m48.9101104s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-642600                                                         | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:53 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:53 UTC | 31 Jul 24 21:55 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                              |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache delete                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh sudo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache reload                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-457100 kubectl --                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | --context functional-457100                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:57 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:57:26
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:57:26.927643    5528 out.go:291] Setting OutFile to fd 1004 ...
	I0731 21:57:26.927643    5528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:57:26.927643    5528 out.go:304] Setting ErrFile to fd 632...
	I0731 21:57:26.927643    5528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:57:26.949590    5528 out.go:298] Setting JSON to false
	I0731 21:57:26.952108    5528 start.go:129] hostinfo: {"hostname":"minikube6","uptime":538988,"bootTime":1721924058,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:57:26.952528    5528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:57:26.959298    5528 out.go:177] * [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:57:26.964607    5528 notify.go:220] Checking for updates...
	I0731 21:57:26.964607    5528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:57:26.967439    5528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:57:26.970249    5528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:57:26.973515    5528 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 21:57:26.976691    5528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:57:26.979807    5528 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:57:26.980399    5528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:57:32.264160    5528 out.go:177] * Using the hyperv driver based on existing profile
	I0731 21:57:32.267953    5528 start.go:297] selected driver: hyperv
	I0731 21:57:32.267953    5528 start.go:901] validating driver "hyperv" against &{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:57:32.268884    5528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:57:32.315738    5528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:57:32.315819    5528 cni.go:84] Creating CNI manager for ""
	I0731 21:57:32.315819    5528 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:57:32.316025    5528 start.go:340] cluster config:
	{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:57:32.316317    5528 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:57:32.320340    5528 out.go:177] * Starting "functional-457100" primary control-plane node in "functional-457100" cluster
	I0731 21:57:32.322788    5528 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:57:32.323100    5528 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:57:32.323100    5528 cache.go:56] Caching tarball of preloaded images
	I0731 21:57:32.323338    5528 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 21:57:32.323338    5528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 21:57:32.323338    5528 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\config.json ...
	I0731 21:57:32.325700    5528 start.go:360] acquireMachinesLock for functional-457100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:57:32.325788    5528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-457100"
	I0731 21:57:32.325788    5528 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:57:32.325788    5528 fix.go:54] fixHost starting: 
	I0731 21:57:32.326436    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:34.967628    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:34.967628    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:34.967756    5528 fix.go:112] recreateIfNeeded on functional-457100: state=Running err=<nil>
	W0731 21:57:34.967756    5528 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:57:34.975390    5528 out.go:177] * Updating the running hyperv "functional-457100" VM ...
	I0731 21:57:34.978017    5528 machine.go:94] provisionDockerMachine start ...
	I0731 21:57:34.978017    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:37.105317    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:37.105436    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:37.105436    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:39.603354    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:39.603354    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:39.608358    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:39.609019    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:39.609019    5528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:57:39.748380    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:57:39.748693    5528 buildroot.go:166] provisioning hostname "functional-457100"
	I0731 21:57:39.748693    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:41.792578    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:41.792578    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:41.793428    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:44.271040    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:44.271040    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:44.276689    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:44.276689    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:44.277267    5528 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-457100 && echo "functional-457100" | sudo tee /etc/hostname
	I0731 21:57:44.436515    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:57:44.436739    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:48.889282    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:48.890411    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:48.896059    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:48.896596    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:48.896596    5528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-457100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-457100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-457100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:57:49.024499    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:57:49.024597    5528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 21:57:49.024676    5528 buildroot.go:174] setting up certificates
	I0731 21:57:49.024676    5528 provision.go:84] configureAuth start
	I0731 21:57:49.024755    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:53.531097    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:53.531097    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:53.531321    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:55.598440    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:55.598440    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:55.598511    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:58.016890    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:58.016890    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:58.016890    5528 provision.go:143] copyHostCerts
	I0731 21:57:58.018552    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 21:57:58.018552    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 21:57:58.019004    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 21:57:58.020501    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 21:57:58.020501    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 21:57:58.020501    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 21:57:58.021952    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 21:57:58.021952    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 21:57:58.021952    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 21:57:58.023924    5528 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-457100 san=[127.0.0.1 172.17.30.24 functional-457100 localhost minikube]
	I0731 21:57:58.292943    5528 provision.go:177] copyRemoteCerts
	I0731 21:57:58.303810    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:57:58.303810    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:00.407556    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:00.407556    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:00.407873    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:02.824869    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:02.824869    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:02.824869    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:02.933291    5528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6294227s)
	I0731 21:58:02.934308    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:58:02.980921    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:58:03.023055    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:58:03.073092    5528 provision.go:87] duration metric: took 14.0482384s to configureAuth
	I0731 21:58:03.073092    5528 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:58:03.073893    5528 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:58:03.073983    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:05.150309    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:05.150495    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:05.150495    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:07.639358    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:07.639358    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:07.645155    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:07.645739    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:07.645739    5528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 21:58:07.783525    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 21:58:07.783525    5528 buildroot.go:70] root file system type: tmpfs
	I0731 21:58:07.783795    5528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 21:58:07.783915    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:09.914198    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:09.914198    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:09.914658    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:12.381894    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:12.381894    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:12.387676    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:12.388450    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:12.388998    5528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 21:58:12.550800    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 21:58:12.550852    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:14.612645    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:14.612645    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:14.612865    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:17.073516    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:17.073516    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:17.078570    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:17.079011    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:17.079082    5528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 21:58:17.228904    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:58:17.228959    5528 machine.go:97] duration metric: took 42.2504099s to provisionDockerMachine
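The provisioning command just above works because diff -u exits non-zero only when the generated docker.service.new differs from the unit already on disk, so the || { ... } branch (install the new unit, daemon-reload, re-enable and restart docker) runs only when something actually changed. The same idiom in isolation, with placeholder paths and service name:

	# Install a regenerated config and restart its service only if it changed.
	# The paths and service name below are placeholders for illustration.
	if ! sudo diff -u /etc/example/app.conf /etc/example/app.conf.new; then
	    sudo mv /etc/example/app.conf.new /etc/example/app.conf
	    sudo systemctl daemon-reload
	    sudo systemctl restart example.service
	fi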
	I0731 21:58:17.229008    5528 start.go:293] postStartSetup for "functional-457100" (driver="hyperv")
	I0731 21:58:17.229008    5528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:58:17.241725    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:58:17.241725    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:19.321269    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:19.321883    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:19.321996    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:21.772224    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:21.772224    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:21.773165    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:21.882063    5528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6402795s)
	I0731 21:58:21.894033    5528 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:58:21.900013    5528 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:58:21.900013    5528 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 21:58:21.901113    5528 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 21:58:21.902062    5528 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 21:58:21.902062    5528 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts -> hosts in /etc/test/nested/copy/12332
	I0731 21:58:21.912080    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12332
	I0731 21:58:21.930707    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 21:58:21.980122    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts --> /etc/test/nested/copy/12332/hosts (40 bytes)
	I0731 21:58:22.024006    5528 start.go:296] duration metric: took 4.794937s for postStartSetup
	I0731 21:58:22.024006    5528 fix.go:56] duration metric: took 49.6975915s for fixHost
	I0731 21:58:22.024006    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:24.083114    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:24.083114    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:24.083591    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:26.565156    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:26.565156    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:26.571139    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:26.571885    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:26.571885    5528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:58:26.709281    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463106.730228845
	
	I0731 21:58:26.709281    5528 fix.go:216] guest clock: 1722463106.730228845
	I0731 21:58:26.709393    5528 fix.go:229] Guest: 2024-07-31 21:58:26.730228845 +0000 UTC Remote: 2024-07-31 21:58:22.0240063 +0000 UTC m=+55.252016501 (delta=4.706222545s)
	I0731 21:58:26.709393    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:31.200759    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:31.200759    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:31.206378    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:31.207043    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:31.207043    5528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722463106
	I0731 21:58:31.354112    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 21:58:26 UTC 2024
	
	I0731 21:58:31.354112    5528 fix.go:236] clock set: Wed Jul 31 21:58:26 UTC 2024
	 (err=<nil>)
	I0731 21:58:31.354112    5528 start.go:83] releasing machines lock for "functional-457100", held for 59.0275786s
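The fixHost step above reads the guest clock over SSH (the %!s(MISSING).%!N(MISSING) text is Go's fmt package flagging literal %s.%N verbs in the logged command string, so the command sent is effectively date +%s.%N), finds a host/guest delta of about 4.7 s, and resets the guest with sudo date -s @1722463106. A rough host-side sketch of that check, assuming SSH access to the guest and an arbitrary 2-second threshold:

	# Resync a guest VM clock against the host if it has drifted too far.
	# GUEST and the 2-second threshold are illustrative assumptions.
	GUEST=docker@172.17.30.24
	guest_epoch=$(ssh "$GUEST" 'date +%s')
	host_epoch=$(date +%s)
	delta=$(( host_epoch - guest_epoch ))
	if [ "${delta#-}" -gt 2 ]; then
	    ssh "$GUEST" "sudo date -s @${host_epoch}"
	fi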
	I0731 21:58:31.354350    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:33.386379    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:33.386379    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:33.386936    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:35.941566    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:35.941566    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:35.946236    5528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 21:58:35.946453    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:35.956544    5528 ssh_runner.go:195] Run: cat /version.json
	I0731 21:58:35.956544    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:38.234071    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:38.234169    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:38.234361    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:40.939792    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:40.939792    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:40.939792    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:40.967341    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:40.967859    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:40.968053    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:41.038620    5528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0923192s)
	W0731 21:58:41.038620    5528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 21:58:41.072555    5528 ssh_runner.go:235] Completed: cat /version.json: (5.1159454s)
	I0731 21:58:41.085430    5528 ssh_runner.go:195] Run: systemctl --version
	I0731 21:58:41.110835    5528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:58:41.123591    5528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	W0731 21:58:41.132462    5528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 21:58:41.132462    5528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 21:58:41.137822    5528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:58:41.160554    5528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:58:41.160554    5528 start.go:495] detecting cgroup driver to use...
	I0731 21:58:41.160554    5528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:58:41.207597    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 21:58:41.240214    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 21:58:41.261067    5528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 21:58:41.272052    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 21:58:41.309920    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:58:41.349016    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 21:58:41.381716    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:58:41.415108    5528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:58:41.448549    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 21:58:41.481731    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 21:58:41.513363    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 21:58:41.545337    5528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:58:41.576421    5528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:58:41.607585    5528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:58:41.878501    5528 ssh_runner.go:195] Run: sudo systemctl restart containerd
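The run of sed edits above reshapes /etc/containerd/config.toml so that containerd uses the cgroupfs cgroup driver (SystemdCgroup = false), the registry.k8s.io/pause:3.9 sandbox image, the runc v2 shim, /etc/cni/net.d as the CNI conf_dir, and enable_unprivileged_ports = true, before containerd is restarted. A quick way to spot-check that the edits landed, assuming the stock config.toml layout in the minikube guest image:

	# Spot-check the values the sed edits are expected to leave behind.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	    /etc/containerd/config.toml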
	I0731 21:58:41.911850    5528 start.go:495] detecting cgroup driver to use...
	I0731 21:58:41.925140    5528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 21:58:41.966675    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:58:42.001109    5528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:58:42.055297    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:58:42.091291    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 21:58:42.116548    5528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:58:42.166991    5528 ssh_runner.go:195] Run: which cri-dockerd
	I0731 21:58:42.185416    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 21:58:42.204423    5528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 21:58:42.251377    5528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 21:58:42.507276    5528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 21:58:42.766044    5528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 21:58:42.766360    5528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
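docker.go then pushes a small daemon.json (130 bytes; the exact content is not shown in the log) so that Docker also uses the cgroupfs cgroup driver, matching the containerd setting above. An illustrative file with that effect, not the verbatim bytes minikube wrote:

	# Illustrative only: pin Docker to the cgroupfs cgroup driver via daemon.json.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker

In the log, the following sudo systemctl restart docker is the step that then fails after roughly 71 seconds, which is what the RUNTIME_ENABLE error below reports.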
	I0731 21:58:42.806144    5528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:58:43.052920    5528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 21:59:54.416755    5528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3629216s)
	I0731 21:59:54.429558    5528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0731 21:59:54.512005    5528 out.go:177] 
	W0731 21:59:54.514737    5528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 31 21:51:36 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.140187046Z" level=info msg="Starting up"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.141624258Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.142486225Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.175887423Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201156889Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201203093Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201257297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201271098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201335003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201528018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201710232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201799139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201818641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201834442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201919648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.202207371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204772070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204859777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204976886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205099996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205220905Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205420521Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230343360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230488171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230708688Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230875701Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230898403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231018312Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231299534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231597557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231689564Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231708666Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231722667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231735268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231746669Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231758870Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231780371Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231795473Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231809774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231821175Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231839476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231852477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231863578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231879679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231891680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231904081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231914882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231926183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231938184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231956985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231968086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231979187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231990188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232004489Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232025391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232044792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232081095Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232130099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232149400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232159901Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232172502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232181303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232192304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232201404Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232644939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232767448Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232843954Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.233061571Z" level=info msg="containerd successfully booted in 0.058039s"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.211045636Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.240939316Z" level=info msg="Loading containers: start."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.390678137Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.602007177Z" level=info msg="Loading containers: done."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619716153Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619930570Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732100286Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732180592Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:51:37 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.968558132Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.969564137Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:06 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970079639Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970169640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970185540Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:07 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:07 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:07 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.024577372Z" level=info msg="Starting up"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.025709377Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.027071484Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1090
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.053618610Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081480643Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081627644Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081822145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081930346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081963946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081980246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082218147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082315647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082338247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082374848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082418548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082600949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086192266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086318566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086490267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086584668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086622868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086647468Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086969670Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087041670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087143370Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087203171Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087348671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087451972Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087822574Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087985974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088025075Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088045875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088061175Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088177275Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088213275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088245476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088261476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088275676Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088304476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088326276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088362676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088395276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088411476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088424677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088438677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088475277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088624577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088644378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088659178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088675578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088689378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088702478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088716678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088733478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088799578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088845079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088902079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088962979Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088979779Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088990479Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089002279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089030379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089079380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089167480Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089644482Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089910684Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090114085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090226085Z" level=info msg="containerd successfully booted in 0.037255s"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.066981347Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.089158153Z" level=info msg="Loading containers: start."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.219246473Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.331552709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.422823545Z" level=info msg="Loading containers: done."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450086675Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450273176Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490051966Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:09 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490184766Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.175969519Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.177623827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:18 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.178621432Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179203935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179380036Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:19 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:19 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:19 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.234105169Z" level=info msg="Starting up"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.235095574Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.236131879Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1443
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.275426167Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301524391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301558991Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301597291Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301611992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301678492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301714392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301870893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301964493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301985293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301997093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302021994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302143994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305397510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305512610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305660011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305818612Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305847912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305866312Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306440615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306551715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306575315Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306594115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306608815Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306709616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306971317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307133118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307263919Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307284819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307297919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307316719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307328519Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307341119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307364419Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307380319Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307392319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307403019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307421519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307435219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307447219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307464420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307481220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307493520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307510520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307523620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307535920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307549620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307560220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307570820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307582420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307596920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307626520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307719221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307737821Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307996322Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308088322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308105623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308117723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308127123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308144223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308253523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308551125Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308688625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308818426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308877926Z" level=info msg="containerd successfully booted in 0.034832s"
	Jul 31 21:52:20 functional-457100 dockerd[1437]: time="2024-07-31T21:52:20.279586959Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.532073082Z" level=info msg="Loading containers: start."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.650230345Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.757021555Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.843605068Z" level=info msg="Loading containers: done."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868852689Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868975989Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.914700908Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:23 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.915624912Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.065885876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066057267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066078166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066169561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.108787445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111084026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111372011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111930982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231160081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231440467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.232339820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.235328564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.255725404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256029588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256067686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256307773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412421155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412556048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412576147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.413056722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518274050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518464340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518494438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518687628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688251810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688351305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688364504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688605392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742252102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742314999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742335498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742637882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.415081422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417072002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417090602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417535197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672128885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672286584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672303583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672947477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.726901345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730139613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730261011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730471309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818312143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818457541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818474141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818831037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453137809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453247308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453369207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453956802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505067581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505293379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505473978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505961274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.243892673Z" level=info msg="ignoring event" container=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244152976Z" level=info msg="shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244277078Z" level=warning msg="cleaning up after shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244289778Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.430677178Z" level=info msg="ignoring event" container=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.431622388Z" level=info msg="shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432575999Z" level=warning msg="cleaning up after shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432599699Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701141282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701647788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701982891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.702422796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972699298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972883800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972898900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.973235604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:40 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.158522051Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366096193Z" level=info msg="shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366171993Z" level=warning msg="cleaning up after shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366186393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.366917397Z" level=info msg="ignoring event" container=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370438217Z" level=info msg="shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370526217Z" level=warning msg="cleaning up after shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370560317Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397022963Z" level=info msg="ignoring event" container=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397098863Z" level=info msg="ignoring event" container=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397129763Z" level=info msg="ignoring event" container=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398294270Z" level=info msg="shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400206080Z" level=info msg="ignoring event" container=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400235180Z" level=info msg="ignoring event" container=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400282481Z" level=info msg="ignoring event" container=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400299081Z" level=info msg="ignoring event" container=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400322781Z" level=info msg="ignoring event" container=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400336981Z" level=info msg="ignoring event" container=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400357081Z" level=info msg="ignoring event" container=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405352709Z" level=warning msg="cleaning up after shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405527410Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405518110Z" level=info msg="shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406977618Z" level=warning msg="cleaning up after shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406992918Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398719672Z" level=info msg="shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409870833Z" level=warning msg="cleaning up after shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409920234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398696172Z" level=info msg="shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410300636Z" level=warning msg="cleaning up after shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410314936Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398775172Z" level=info msg="shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412827850Z" level=warning msg="cleaning up after shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412918650Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400683383Z" level=info msg="shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416098568Z" level=warning msg="cleaning up after shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416148268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.432809660Z" level=info msg="ignoring event" container=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400657183Z" level=info msg="shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.432721259Z" level=info msg="shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437529786Z" level=warning msg="cleaning up after shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437674586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.402022290Z" level=info msg="shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444567624Z" level=warning msg="cleaning up after shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444581724Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400770683Z" level=info msg="shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450439057Z" level=warning msg="cleaning up after shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450487457Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.457761397Z" level=warning msg="cleaning up after shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.461575018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.528698587Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.553332023Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.240545103Z" level=info msg="shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1437]: time="2024-07-31T21:54:45.241462908Z" level=info msg="ignoring event" container=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.241761310Z" level=warning msg="cleaning up after shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.242634215Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.266303218Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.326406525Z" level=info msg="ignoring event" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327561619Z" level=info msg="shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327676319Z" level=warning msg="cleaning up after shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327690819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408104227Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408389825Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408538125Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408581925Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:54:51 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Consumed 5.211s CPU time.
	Jul 31 21:54:51 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.462273945Z" level=info msg="Starting up"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.463306441Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.464273336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4365
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.491677521Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514730923Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514766823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514851123Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514876123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514922823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514936023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515076022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515166022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515202521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515213321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515234621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515340521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518312108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518442208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518585707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518673307Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518751006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518824006Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519144305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519198005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519230304Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519245104Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519257204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519301004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519527603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519649003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519752202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519770102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519820602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519844802Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519857702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519873802Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519887502Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519898002Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519908002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519917502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519934301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519946101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519956401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519972901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519987901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519999101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520010001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520020401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520031001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520042701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520052401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520064501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520074901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520087901Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520104701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520115201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520132601Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520243100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520279600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520291800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520301900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520310100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520321600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520330300Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520565299Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520697898Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520826798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520865998Z" level=info msg="containerd successfully booted in 0.030507s"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.509854424Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.544442499Z" level=info msg="Loading containers: start."
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.766225297Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.880382984Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.975441441Z" level=info msg="Loading containers: done."
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002640143Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002734843Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.049225501Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:54:53 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.050327898Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596755794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596967394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.600546486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.601065685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680141326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680651525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680770625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.681471624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.787834610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788128709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788238909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788745008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.801826082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803337879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803354379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.807862569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.156662041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157102141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157156641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157568240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455023173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455506272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455757172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.456589870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545423521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545907920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545933220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.546077820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.657368733Z" level=info msg="ignoring event" container=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.672943306Z" level=info msg="shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673012506Z" level=warning msg="cleaning up after shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673023106Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.720414627Z" level=info msg="ignoring event" container=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.720390327Z" level=info msg="shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721135325Z" level=warning msg="cleaning up after shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721237625Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.734252003Z" level=info msg="ignoring event" container=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735582901Z" level=info msg="shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735748001Z" level=warning msg="cleaning up after shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735761901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.770324643Z" level=info msg="shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.770521742Z" level=info msg="ignoring event" container=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771116141Z" level=warning msg="cleaning up after shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771238441Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825454850Z" level=info msg="shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825512350Z" level=warning msg="cleaning up after shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825523050Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.825710850Z" level=info msg="ignoring event" container=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.957732228Z" level=info msg="ignoring event" container=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958342127Z" level=info msg="shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958423726Z" level=warning msg="cleaning up after shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958446226Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4359]: time="2024-07-31T21:54:57.010679446Z" level=info msg="ignoring event" container=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011517745Z" level=info msg="shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011573245Z" level=warning msg="cleaning up after shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011583945Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.915260843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916553549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916865250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.917443053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.977907139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978170841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978187341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978621843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.019033834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020195140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020326840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020713542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084397044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084724345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084747945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.085373848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.317035345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318630753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318648353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318868154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519366003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519674205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519857506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.520226207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.532460165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533365469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533483570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533656571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618768874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618990475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619552178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619656978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740274100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740778002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741050603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741447605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790403637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790938240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.791136941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.793454752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799421380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799486680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799499780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799611481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325064572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325272173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325432774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325661875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.403697645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404083647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404418748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.405357753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675272733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675642334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675943436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.676359838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.106181204Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:58:43 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.297975932Z" level=info msg="ignoring event" container=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.298919937Z" level=info msg="ignoring event" container=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304623068Z" level=info msg="shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304753768Z" level=warning msg="cleaning up after shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304771769Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.305922375Z" level=info msg="shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312090308Z" level=warning msg="cleaning up after shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312184808Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.335289432Z" level=info msg="ignoring event" container=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335598134Z" level=info msg="shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335668334Z" level=warning msg="cleaning up after shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335680634Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.351413219Z" level=info msg="ignoring event" container=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352173623Z" level=info msg="shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352345424Z" level=warning msg="cleaning up after shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352953227Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.397610666Z" level=info msg="ignoring event" container=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398677672Z" level=info msg="shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398782972Z" level=warning msg="cleaning up after shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.399526776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.402449292Z" level=info msg="ignoring event" container=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.402467392Z" level=info msg="shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403097396Z" level=warning msg="cleaning up after shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403243196Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.413937254Z" level=info msg="ignoring event" container=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414504557Z" level=info msg="shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414900059Z" level=warning msg="cleaning up after shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.415026660Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429858739Z" level=info msg="ignoring event" container=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429971840Z" level=info msg="ignoring event" container=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.430056440Z" level=info msg="ignoring event" container=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430200841Z" level=info msg="shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430412942Z" level=warning msg="cleaning up after shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430689343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.442089105Z" level=info msg="shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.442229705Z" level=info msg="ignoring event" container=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447102231Z" level=warning msg="cleaning up after shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447257932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447025231Z" level=info msg="shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459716799Z" level=warning msg="cleaning up after shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459728099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447081731Z" level=info msg="shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462555614Z" level=warning msg="cleaning up after shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462567814Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.476556089Z" level=info msg="ignoring event" container=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.476821491Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479748706Z" level=info msg="shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479845307Z" level=warning msg="cleaning up after shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479857707Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.574091512Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4359]: time="2024-07-31T21:58:48.222487529Z" level=info msg="ignoring event" container=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222702130Z" level=info msg="shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222762830Z" level=warning msg="cleaning up after shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222834130Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.253468471Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.289266541Z" level=info msg="ignoring event" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291442308Z" level=info msg="shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291783303Z" level=warning msg="cleaning up after shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.292023000Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360036592Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360455786Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360615283Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360656983Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:58:54 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Consumed 9.040s CPU time.
	Jul 31 21:58:54 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:58:54 functional-457100 dockerd[8563]: time="2024-07-31T21:58:54.414155271Z" level=info msg="Starting up"
	Jul 31 21:59:54 functional-457100 dockerd[8563]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 21:59:54 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0731 21:59:54.516331    5528 out.go:239] * 
	W0731 21:59:54.518709    5528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:59:54.523834    5528 out.go:177] 
	
	
	==> Docker <==
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="error getting RW layer size for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:00:54 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:00:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:00:54 functional-457100 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 31 22:00:54 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 22:00:54 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:00:56Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +14.376143] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.211673] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.417454] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 22:01:55 up 11 min,  0 users,  load average: 0.08, 0.24, 0.17
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:01:49 functional-457100 kubelet[5478]: I0731 22:01:49.084064    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:51 functional-457100 kubelet[5478]: E0731 22:01:51.417833    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.841808377s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.198089    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?resourceVersion=0&timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.199119    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.200468    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.201912    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.203044    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:01:52 functional-457100 kubelet[5478]: E0731 22:01:52.203146    5478 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.344873    5478 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-457100.17e76b15c590a39b\": dial tcp 172.17.30.24:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-457100.17e76b15c590a39b  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-457100,UID:b1f21da6d6d77b6662df523b7b4dbe14,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.17.30.24:8441/readyz\": dial tcp 172.17.30.24:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-457100,},FirstTimestamp:2024-07-31 21:58:43.745579931 +0000 UTC m=+224.870696242,LastTimestamp:2024-07-31 21:58:44.746068994 +0000 UTC m=+225.871185305,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-457100,}"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995295    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995339    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995369    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995401    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: I0731 22:01:54.995434    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995533    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995561    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995584    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995603    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995618    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995668    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995695    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.995716    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.996385    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.996459    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:01:54 functional-457100 kubelet[5478]: E0731 22:01:54.996921    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:00:06.475149    7252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:00:54.643589    7252 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.681438    7252 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.710783    7252 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.743383    7252 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.774075    7252 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.806513    7252 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.834554    7252 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:00:54.868776    7252 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (11.594927s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:01:55.713900    4688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (280.44s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (120.34s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-457100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-457100 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1817091s)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-457100 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.3623609s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:02:09.491830   11784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
E0731 22:02:53.107314   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (1m35.0769322s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-642600 --log_dir                                                  | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-642600                                                         | nospam-642600     | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:53 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:53 UTC | 31 Jul 24 21:55 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:55 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache add                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:55 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                              |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache delete                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | minikube-local-cache-test:functional-457100                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh sudo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache reload                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-457100 kubectl --                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | --context functional-457100                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:57 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:57:26
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:57:26.927643    5528 out.go:291] Setting OutFile to fd 1004 ...
	I0731 21:57:26.927643    5528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:57:26.927643    5528 out.go:304] Setting ErrFile to fd 632...
	I0731 21:57:26.927643    5528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:57:26.949590    5528 out.go:298] Setting JSON to false
	I0731 21:57:26.952108    5528 start.go:129] hostinfo: {"hostname":"minikube6","uptime":538988,"bootTime":1721924058,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:57:26.952528    5528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:57:26.959298    5528 out.go:177] * [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:57:26.964607    5528 notify.go:220] Checking for updates...
	I0731 21:57:26.964607    5528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:57:26.967439    5528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:57:26.970249    5528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:57:26.973515    5528 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 21:57:26.976691    5528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:57:26.979807    5528 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:57:26.980399    5528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:57:32.264160    5528 out.go:177] * Using the hyperv driver based on existing profile
	I0731 21:57:32.267953    5528 start.go:297] selected driver: hyperv
	I0731 21:57:32.267953    5528 start.go:901] validating driver "hyperv" against &{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:57:32.268884    5528 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:57:32.315738    5528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:57:32.315819    5528 cni.go:84] Creating CNI manager for ""
	I0731 21:57:32.315819    5528 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:57:32.316025    5528 start.go:340] cluster config:
	{Name:functional-457100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-457100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.30.24 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:57:32.316317    5528 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:57:32.320340    5528 out.go:177] * Starting "functional-457100" primary control-plane node in "functional-457100" cluster
	I0731 21:57:32.322788    5528 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:57:32.323100    5528 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:57:32.323100    5528 cache.go:56] Caching tarball of preloaded images
	I0731 21:57:32.323338    5528 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 21:57:32.323338    5528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 21:57:32.323338    5528 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\config.json ...
	I0731 21:57:32.325700    5528 start.go:360] acquireMachinesLock for functional-457100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:57:32.325788    5528 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-457100"
	I0731 21:57:32.325788    5528 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:57:32.325788    5528 fix.go:54] fixHost starting: 
	I0731 21:57:32.326436    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:34.967628    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:34.967628    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:34.967756    5528 fix.go:112] recreateIfNeeded on functional-457100: state=Running err=<nil>
	W0731 21:57:34.967756    5528 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:57:34.975390    5528 out.go:177] * Updating the running hyperv "functional-457100" VM ...
	I0731 21:57:34.978017    5528 machine.go:94] provisionDockerMachine start ...
	I0731 21:57:34.978017    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:37.105317    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:37.105436    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:37.105436    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:39.603354    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:39.603354    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:39.608358    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:39.609019    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:39.609019    5528 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:57:39.748380    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:57:39.748693    5528 buildroot.go:166] provisioning hostname "functional-457100"
	I0731 21:57:39.748693    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:41.792578    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:41.792578    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:41.793428    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:44.271040    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:44.271040    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:44.276689    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:44.276689    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:44.277267    5528 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-457100 && echo "functional-457100" | sudo tee /etc/hostname
	I0731 21:57:44.436515    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-457100
	
	I0731 21:57:44.436739    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:46.477429    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:48.889282    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:48.890411    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:48.896059    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:57:48.896596    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:57:48.896596    5528 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-457100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-457100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-457100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:57:49.024499    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:57:49.024597    5528 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 21:57:49.024676    5528 buildroot.go:174] setting up certificates
	I0731 21:57:49.024676    5528 provision.go:84] configureAuth start
	I0731 21:57:49.024755    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:51.113114    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:53.531097    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:53.531097    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:53.531321    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:57:55.598440    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:57:55.598440    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:55.598511    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:57:58.016890    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:57:58.016890    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:57:58.016890    5528 provision.go:143] copyHostCerts
	I0731 21:57:58.018552    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 21:57:58.018552    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 21:57:58.019004    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 21:57:58.020501    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 21:57:58.020501    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 21:57:58.020501    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 21:57:58.021952    5528 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 21:57:58.021952    5528 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 21:57:58.021952    5528 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 21:57:58.023924    5528 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-457100 san=[127.0.0.1 172.17.30.24 functional-457100 localhost minikube]
	I0731 21:57:58.292943    5528 provision.go:177] copyRemoteCerts
	I0731 21:57:58.303810    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:57:58.303810    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:00.407556    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:00.407556    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:00.407873    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:02.824869    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:02.824869    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:02.824869    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:02.933291    5528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6294227s)
	I0731 21:58:02.934308    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:58:02.980921    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:58:03.023055    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:58:03.073092    5528 provision.go:87] duration metric: took 14.0482384s to configureAuth
	I0731 21:58:03.073092    5528 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:58:03.073893    5528 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 21:58:03.073983    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:05.150309    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:05.150495    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:05.150495    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:07.639358    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:07.639358    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:07.645155    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:07.645739    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:07.645739    5528 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 21:58:07.783525    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 21:58:07.783525    5528 buildroot.go:70] root file system type: tmpfs
	I0731 21:58:07.783795    5528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 21:58:07.783915    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:09.914198    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:09.914198    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:09.914658    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:12.381894    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:12.381894    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:12.387676    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:12.388450    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:12.388998    5528 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 21:58:12.550800    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 21:58:12.550852    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:14.612645    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:14.612645    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:14.612865    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:17.073516    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:17.073516    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:17.078570    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:17.079011    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:17.079082    5528 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 21:58:17.228904    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:58:17.228959    5528 machine.go:97] duration metric: took 42.2504099s to provisionDockerMachine
	I0731 21:58:17.229008    5528 start.go:293] postStartSetup for "functional-457100" (driver="hyperv")
	I0731 21:58:17.229008    5528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:58:17.241725    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:58:17.241725    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:19.321269    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:19.321883    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:19.321996    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:21.772224    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:21.772224    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:21.773165    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:21.882063    5528 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6402795s)
	I0731 21:58:21.894033    5528 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:58:21.900013    5528 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:58:21.900013    5528 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 21:58:21.901113    5528 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 21:58:21.902062    5528 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 21:58:21.902062    5528 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts -> hosts in /etc/test/nested/copy/12332
	I0731 21:58:21.912080    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12332
	I0731 21:58:21.930707    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 21:58:21.980122    5528 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts --> /etc/test/nested/copy/12332/hosts (40 bytes)
	I0731 21:58:22.024006    5528 start.go:296] duration metric: took 4.794937s for postStartSetup
	I0731 21:58:22.024006    5528 fix.go:56] duration metric: took 49.6975915s for fixHost
	I0731 21:58:22.024006    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:24.083114    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:24.083114    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:24.083591    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:26.565156    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:26.565156    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:26.571139    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:26.571885    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:26.571885    5528 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:58:26.709281    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463106.730228845
	
	I0731 21:58:26.709281    5528 fix.go:216] guest clock: 1722463106.730228845
	I0731 21:58:26.709393    5528 fix.go:229] Guest: 2024-07-31 21:58:26.730228845 +0000 UTC Remote: 2024-07-31 21:58:22.0240063 +0000 UTC m=+55.252016501 (delta=4.706222545s)
	I0731 21:58:26.709393    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:28.751522    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:31.200759    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:31.200759    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:31.206378    5528 main.go:141] libmachine: Using SSH client type: native
	I0731 21:58:31.207043    5528 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.30.24 22 <nil> <nil>}
	I0731 21:58:31.207043    5528 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722463106
	I0731 21:58:31.354112    5528 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 21:58:26 UTC 2024
	
	I0731 21:58:31.354112    5528 fix.go:236] clock set: Wed Jul 31 21:58:26 UTC 2024
	 (err=<nil>)
	I0731 21:58:31.354112    5528 start.go:83] releasing machines lock for "functional-457100", held for 59.0275786s
	I0731 21:58:31.354350    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:33.386379    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:33.386379    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:33.386936    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:35.941566    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:35.941566    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:35.946236    5528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 21:58:35.946453    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:35.956544    5528 ssh_runner.go:195] Run: cat /version.json
	I0731 21:58:35.956544    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
	I0731 21:58:38.234071    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:38.234169    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:38.234361    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:38.234776    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
	I0731 21:58:40.939792    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:40.939792    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:40.939792    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:40.967341    5528 main.go:141] libmachine: [stdout =====>] : 172.17.30.24
	
	I0731 21:58:40.967859    5528 main.go:141] libmachine: [stderr =====>] : 
	I0731 21:58:40.968053    5528 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
	I0731 21:58:41.038620    5528 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0923192s)
	W0731 21:58:41.038620    5528 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
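The status 127 above comes from invoking the Windows binary name curl.exe inside the Linux guest, where only curl exists; that is what triggers the "Failing to connect to https://registry.k8s.io/" warning below. The probe can be repeated by hand with the guest's own binary (a sketch, assuming curl is present in the guest image):

	ssh docker@172.17.30.24 'curl -sS -m 2 https://registry.k8s.io/'   # Linux curl, not curl.exe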
	I0731 21:58:41.072555    5528 ssh_runner.go:235] Completed: cat /version.json: (5.1159454s)
	I0731 21:58:41.085430    5528 ssh_runner.go:195] Run: systemctl --version
	I0731 21:58:41.110835    5528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:58:41.123591    5528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	W0731 21:58:41.132462    5528 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 21:58:41.132462    5528 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 21:58:41.137822    5528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:58:41.160554    5528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:58:41.160554    5528 start.go:495] detecting cgroup driver to use...
	I0731 21:58:41.160554    5528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:58:41.207597    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 21:58:41.240214    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 21:58:41.261067    5528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 21:58:41.272052    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 21:58:41.309920    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:58:41.349016    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 21:58:41.381716    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 21:58:41.415108    5528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:58:41.448549    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 21:58:41.481731    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 21:58:41.513363    5528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
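The sed commands above rewrite /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), the registry.k8s.io/pause:3.9 sandbox image, the runc v2 runtime, the /etc/cni/net.d conf_dir, and unprivileged ports. A quick way to confirm the result inside the guest (a sketch; the grep keys simply mirror the commands shown, and /etc/crictl.yaml is the file written earlier in the log):

	ssh docker@172.17.30.24 \
	  'grep -E "SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports" /etc/containerd/config.toml'
	ssh docker@172.17.30.24 'cat /etc/crictl.yaml'   # runtime-endpoint set above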
	I0731 21:58:41.545337    5528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:58:41.576421    5528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:58:41.607585    5528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:58:41.878501    5528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 21:58:41.911850    5528 start.go:495] detecting cgroup driver to use...
	I0731 21:58:41.925140    5528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 21:58:41.966675    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:58:42.001109    5528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:58:42.055297    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:58:42.091291    5528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 21:58:42.116548    5528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:58:42.166991    5528 ssh_runner.go:195] Run: which cri-dockerd
	I0731 21:58:42.185416    5528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 21:58:42.204423    5528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 21:58:42.251377    5528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 21:58:42.507276    5528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 21:58:42.766044    5528 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 21:58:42.766360    5528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 21:58:42.806144    5528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:58:43.052920    5528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 21:59:54.416755    5528 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3629216s)
	I0731 21:59:54.429558    5528 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0731 21:59:54.512005    5528 out.go:177] 
	W0731 21:59:54.514737    5528 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 31 21:51:36 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.140187046Z" level=info msg="Starting up"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.141624258Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:51:36 functional-457100 dockerd[665]: time="2024-07-31T21:51:36.142486225Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.175887423Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201156889Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201203093Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201257297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201271098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201335003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201528018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201710232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201799139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201818641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201834442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.201919648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.202207371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204772070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204859777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.204976886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205099996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205220905Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.205420521Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230343360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230488171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230708688Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230875701Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.230898403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231018312Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231299534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231597557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231689564Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231708666Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231722667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231735268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231746669Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231758870Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231780371Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231795473Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231809774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231821175Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231839476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231852477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231863578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231879679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231891680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231904081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231914882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231926183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231938184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231956985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231968086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231979187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.231990188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232004489Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232025391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232044792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232081095Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232130099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232149400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232159901Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232172502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232181303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232192304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232201404Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232644939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232767448Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.232843954Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:51:36 functional-457100 dockerd[671]: time="2024-07-31T21:51:36.233061571Z" level=info msg="containerd successfully booted in 0.058039s"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.211045636Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.240939316Z" level=info msg="Loading containers: start."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.390678137Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.602007177Z" level=info msg="Loading containers: done."
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619716153Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.619930570Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732100286Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:51:37 functional-457100 dockerd[665]: time="2024-07-31T21:51:37.732180592Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:51:37 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.968558132Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.969564137Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:06 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970079639Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970169640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:06 functional-457100 dockerd[665]: time="2024-07-31T21:52:06.970185540Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:07 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:07 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:07 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.024577372Z" level=info msg="Starting up"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.025709377Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:08 functional-457100 dockerd[1083]: time="2024-07-31T21:52:08.027071484Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1090
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.053618610Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081480643Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081627644Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081822145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081930346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081963946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.081980246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082218147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082315647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082338247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082374848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082418548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.082600949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086192266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086318566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086490267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086584668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086622868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086647468Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.086969670Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087041670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087143370Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087203171Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087348671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087451972Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087822574Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.087985974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088025075Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088045875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088061175Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088177275Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088213275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088245476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088261476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088275676Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088304476Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088326276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088362676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088395276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088411476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088424677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088438677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088475277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088624577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088644378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088659178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088675578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088689378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088702478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088716678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088733478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088799578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088845079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088902079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088962979Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088979779Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.088990479Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089002279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089030379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089079380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089167480Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089644482Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.089910684Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090114085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:08 functional-457100 dockerd[1090]: time="2024-07-31T21:52:08.090226085Z" level=info msg="containerd successfully booted in 0.037255s"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.066981347Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.089158153Z" level=info msg="Loading containers: start."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.219246473Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.331552709Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.422823545Z" level=info msg="Loading containers: done."
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450086675Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.450273176Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490051966Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:09 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:09 functional-457100 dockerd[1083]: time="2024-07-31T21:52:09.490184766Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.175969519Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.177623827Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:52:18 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.178621432Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179203935Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:52:18 functional-457100 dockerd[1083]: time="2024-07-31T21:52:18.179380036Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:52:19 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:52:19 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:52:19 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.234105169Z" level=info msg="Starting up"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.235095574Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:52:19 functional-457100 dockerd[1437]: time="2024-07-31T21:52:19.236131879Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1443
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.275426167Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301524391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301558991Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301597291Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301611992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301678492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301714392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301870893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301964493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301985293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.301997093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302021994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.302143994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305397510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305512610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305660011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305818612Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305847912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.305866312Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306440615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306551715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306575315Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306594115Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306608815Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306709616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.306971317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307133118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307263919Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307284819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307297919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307316719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307328519Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307341119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307364419Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307380319Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307392319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307403019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307421519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307435219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307447219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307464420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307481220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307493520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307510520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307523620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307535920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307549620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307560220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307570820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307582420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307596920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307626520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307719221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307737821Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.307996322Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308088322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308105623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308117723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308127123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308144223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308253523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308551125Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308688625Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308818426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:52:19 functional-457100 dockerd[1443]: time="2024-07-31T21:52:19.308877926Z" level=info msg="containerd successfully booted in 0.034832s"
	Jul 31 21:52:20 functional-457100 dockerd[1437]: time="2024-07-31T21:52:20.279586959Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.532073082Z" level=info msg="Loading containers: start."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.650230345Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.757021555Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.843605068Z" level=info msg="Loading containers: done."
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868852689Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.868975989Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.914700908Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:52:23 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:52:23 functional-457100 dockerd[1437]: time="2024-07-31T21:52:23.915624912Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.065885876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066057267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066078166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.066169561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.108787445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111084026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111372011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.111930982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231160081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.231440467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.232339820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.235328564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.255725404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256029588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256067686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.256307773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412421155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412556048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.412576147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.413056722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518274050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518464340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518494438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.518687628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688251810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688351305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688364504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.688605392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742252102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742314999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742335498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:31 functional-457100 dockerd[1443]: time="2024-07-31T21:52:31.742637882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.415081422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417072002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417090602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.417535197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672128885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672286584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672303583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.672947477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.726901345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730139613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730261011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.730471309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818312143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818457541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818474141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:52 functional-457100 dockerd[1443]: time="2024-07-31T21:52:52.818831037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453137809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453247308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453369207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.453956802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505067581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505293379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505473978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:53 functional-457100 dockerd[1443]: time="2024-07-31T21:52:53.505961274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.243892673Z" level=info msg="ignoring event" container=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244152976Z" level=info msg="shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244277078Z" level=warning msg="cleaning up after shim disconnected" id=d4cfefd6b9e6afc1bcbccaff888c92cf9d126347c314bc4bcbc7adf32bac7066 namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.244289778Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1437]: time="2024-07-31T21:52:59.430677178Z" level=info msg="ignoring event" container=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.431622388Z" level=info msg="shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432575999Z" level=warning msg="cleaning up after shim disconnected" id=ec3de230e947a4b7a3fe794fe8771a8e9fdae90fe51ca8c7d5f17e5e2f882b7a namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.432599699Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701141282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701647788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.701982891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.702422796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972699298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972883800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.972898900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:52:59 functional-457100 dockerd[1443]: time="2024-07-31T21:52:59.973235604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:40 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.158522051Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366096193Z" level=info msg="shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366171993Z" level=warning msg="cleaning up after shim disconnected" id=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.366186393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.366917397Z" level=info msg="ignoring event" container=40bb191cca35507904e1e7ce853ac4a63b59f50668e53c18492332190696bad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370438217Z" level=info msg="shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370526217Z" level=warning msg="cleaning up after shim disconnected" id=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.370560317Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397022963Z" level=info msg="ignoring event" container=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397098863Z" level=info msg="ignoring event" container=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.397129763Z" level=info msg="ignoring event" container=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398294270Z" level=info msg="shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400206080Z" level=info msg="ignoring event" container=9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400235180Z" level=info msg="ignoring event" container=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400282481Z" level=info msg="ignoring event" container=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400299081Z" level=info msg="ignoring event" container=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400322781Z" level=info msg="ignoring event" container=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400336981Z" level=info msg="ignoring event" container=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.400357081Z" level=info msg="ignoring event" container=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405352709Z" level=warning msg="cleaning up after shim disconnected" id=ba2dfdeb46e2a4034db25f77bf4db6d738e6e7daa1125101260bb5bd85f513e1 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405527410Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.405518110Z" level=info msg="shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406977618Z" level=warning msg="cleaning up after shim disconnected" id=88bc9cae605602ffb2038d9e82dfa05246770afc36341f956f0660024948156c namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.406992918Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398719672Z" level=info msg="shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409870833Z" level=warning msg="cleaning up after shim disconnected" id=ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.409920234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398696172Z" level=info msg="shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410300636Z" level=warning msg="cleaning up after shim disconnected" id=8f4a11d770e94016ac8b881c22e247cfef9aa7e6894fb626ff15dfabb30586b7 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.410314936Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.398775172Z" level=info msg="shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412827850Z" level=warning msg="cleaning up after shim disconnected" id=45f62d68ad15622d00b82f52b1e5951c92f402e397433f010eb07d0ca3a48cad namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.412918650Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400683383Z" level=info msg="shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416098568Z" level=warning msg="cleaning up after shim disconnected" id=06c280a26f1621296388a39e0fe77a4ef29cad361c7e58c93e582a1a11f2f424 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.416148268Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1437]: time="2024-07-31T21:54:40.432809660Z" level=info msg="ignoring event" container=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400657183Z" level=info msg="shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.432721259Z" level=info msg="shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437529786Z" level=warning msg="cleaning up after shim disconnected" id=b15ead25b6f041e17ee4c9b090b44541421416401046cb990a08a16f524fcdb4 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.437674586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.402022290Z" level=info msg="shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444567624Z" level=warning msg="cleaning up after shim disconnected" id=86a41f57da0bdece4ac8af0f25d8ff3c30e01b5ff576bd4221d1f560295da398 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.444581724Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.400770683Z" level=info msg="shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450439057Z" level=warning msg="cleaning up after shim disconnected" id=5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.450487457Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.457761397Z" level=warning msg="cleaning up after shim disconnected" id=ca7c2a0fa7496028ea91a1614239d2ff494f7bff054b7919bc6a844283410888 namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.461575018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.528698587Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:40 functional-457100 dockerd[1443]: time="2024-07-31T21:54:40.553332023Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.240545103Z" level=info msg="shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1437]: time="2024-07-31T21:54:45.241462908Z" level=info msg="ignoring event" container=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.241761310Z" level=warning msg="cleaning up after shim disconnected" id=1fc3088316c03222330796e4a97811f2fb959c426e432c13e8d6735294c74907 namespace=moby
	Jul 31 21:54:45 functional-457100 dockerd[1443]: time="2024-07-31T21:54:45.242634215Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.266303218Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.326406525Z" level=info msg="ignoring event" container=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327561619Z" level=info msg="shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327676319Z" level=warning msg="cleaning up after shim disconnected" id=1c93bad17003c4a8d6b8110c849658d32a77c96842be3b026e2e2339f41a10da namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1443]: time="2024-07-31T21:54:50.327690819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408104227Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408389825Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408538125Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:54:50 functional-457100 dockerd[1437]: time="2024-07-31T21:54:50.408581925Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:54:51 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:54:51 functional-457100 systemd[1]: docker.service: Consumed 5.211s CPU time.
	Jul 31 21:54:51 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.462273945Z" level=info msg="Starting up"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.463306441Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 21:54:51 functional-457100 dockerd[4359]: time="2024-07-31T21:54:51.464273336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4365
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.491677521Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514730923Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514766823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514851123Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514876123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514922823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.514936023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515076022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515166022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515202521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515213321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515234621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.515340521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518312108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518442208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518585707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518673307Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518751006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.518824006Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519144305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519198005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519230304Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519245104Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519257204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519301004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519527603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519649003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519752202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519770102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519820602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519844802Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519857702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519873802Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519887502Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519898002Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519908002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519917502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519934301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519946101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519956401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519972901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519987901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.519999101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520010001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520020401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520031001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520042701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520052401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520064501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520074901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520087901Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520104701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520115201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520132601Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520243100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520279600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520291800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520301900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520310100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520321600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520330300Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520565299Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520697898Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520826798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 21:54:51 functional-457100 dockerd[4365]: time="2024-07-31T21:54:51.520865998Z" level=info msg="containerd successfully booted in 0.030507s"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.509854424Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.544442499Z" level=info msg="Loading containers: start."
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.766225297Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.880382984Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 31 21:54:52 functional-457100 dockerd[4359]: time="2024-07-31T21:54:52.975441441Z" level=info msg="Loading containers: done."
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002640143Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.002734843Z" level=info msg="Daemon has completed initialization"
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.049225501Z" level=info msg="API listen on [::]:2376"
	Jul 31 21:54:53 functional-457100 systemd[1]: Started Docker Application Container Engine.
	Jul 31 21:54:53 functional-457100 dockerd[4359]: time="2024-07-31T21:54:53.050327898Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596755794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.596967394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.600546486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.601065685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680141326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680651525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.680770625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.681471624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.787834610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788128709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788238909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.788745008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.801826082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803337879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.803354379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:55 functional-457100 dockerd[4365]: time="2024-07-31T21:54:55.807862569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.156662041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157102141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157156641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.157568240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455023173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455506272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.455757172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.456589870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545423521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545907920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.545933220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.546077820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.657368733Z" level=info msg="ignoring event" container=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.672943306Z" level=info msg="shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673012506Z" level=warning msg="cleaning up after shim disconnected" id=e895872468f712a3914c5c8474bea6549bd00d9bd4d45f1c1a53c8d6bf0ea9cd namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.673023106Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.720414627Z" level=info msg="ignoring event" container=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.720390327Z" level=info msg="shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721135325Z" level=warning msg="cleaning up after shim disconnected" id=f6c70a5cd836175103ea3255c73ecc51f8f7a03bc4cc9dc01a62254f060ae041 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.721237625Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.734252003Z" level=info msg="ignoring event" container=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735582901Z" level=info msg="shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735748001Z" level=warning msg="cleaning up after shim disconnected" id=0c983bd8b69f0b6d75b974f9068e2255121444aa6935bef814a937f119fa8c90 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.735761901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.770324643Z" level=info msg="shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.770521742Z" level=info msg="ignoring event" container=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771116141Z" level=warning msg="cleaning up after shim disconnected" id=476c48aee807677487d6150034a9c7487f33c57aa304b9e84960c520fc6f9038 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.771238441Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825454850Z" level=info msg="shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825512350Z" level=warning msg="cleaning up after shim disconnected" id=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.825523050Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.825710850Z" level=info msg="ignoring event" container=d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4359]: time="2024-07-31T21:54:56.957732228Z" level=info msg="ignoring event" container=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958342127Z" level=info msg="shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958423726Z" level=warning msg="cleaning up after shim disconnected" id=181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80 namespace=moby
	Jul 31 21:54:56 functional-457100 dockerd[4365]: time="2024-07-31T21:54:56.958446226Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4359]: time="2024-07-31T21:54:57.010679446Z" level=info msg="ignoring event" container=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011517745Z" level=info msg="shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011573245Z" level=warning msg="cleaning up after shim disconnected" id=251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048 namespace=moby
	Jul 31 21:54:57 functional-457100 dockerd[4365]: time="2024-07-31T21:54:57.011583945Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.915260843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916553549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.916865250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.917443053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.977907139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978170841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978187341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:54:59 functional-457100 dockerd[4365]: time="2024-07-31T21:54:59.978621843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.019033834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020195140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020326840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.020713542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084397044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084724345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.084747945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.085373848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.317035345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318630753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318648353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.318868154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519366003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519674205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.519857506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.520226207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.532460165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533365469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533483570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.533656571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618768874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.618990475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619552178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:00 functional-457100 dockerd[4365]: time="2024-07-31T21:55:00.619656978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740274100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.740778002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741050603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.741447605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790403637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.790938240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.791136941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.793454752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799421380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799486680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799499780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:04 functional-457100 dockerd[4365]: time="2024-07-31T21:55:04.799611481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325064572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325272173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325432774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.325661875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.403697645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404083647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.404418748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.405357753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675272733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675642334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.675943436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:55:05 functional-457100 dockerd[4365]: time="2024-07-31T21:55:05.676359838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.106181204Z" level=info msg="Processing signal 'terminated'"
	Jul 31 21:58:43 functional-457100 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.297975932Z" level=info msg="ignoring event" container=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.298919937Z" level=info msg="ignoring event" container=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304623068Z" level=info msg="shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304753768Z" level=warning msg="cleaning up after shim disconnected" id=11cf2aabc43fb6ba3e81786e9a812af3da77ce9302429383d603f362b2d89c1a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.304771769Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.305922375Z" level=info msg="shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312090308Z" level=warning msg="cleaning up after shim disconnected" id=35db2de8873303b7cfdccb0768931f8b15bc16eeaa3e37bbbe648f29cb839db2 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.312184808Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.335289432Z" level=info msg="ignoring event" container=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335598134Z" level=info msg="shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335668334Z" level=warning msg="cleaning up after shim disconnected" id=1a92960f0ddb13f346235d70a40c176a83d38c6232cd0b1d8eec64c30ccd42e5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.335680634Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.351413219Z" level=info msg="ignoring event" container=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352173623Z" level=info msg="shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352345424Z" level=warning msg="cleaning up after shim disconnected" id=df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.352953227Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.397610666Z" level=info msg="ignoring event" container=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398677672Z" level=info msg="shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.398782972Z" level=warning msg="cleaning up after shim disconnected" id=4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.399526776Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.402449292Z" level=info msg="ignoring event" container=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.402467392Z" level=info msg="shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403097396Z" level=warning msg="cleaning up after shim disconnected" id=d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.403243196Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.413937254Z" level=info msg="ignoring event" container=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414504557Z" level=info msg="shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.414900059Z" level=warning msg="cleaning up after shim disconnected" id=d8821f7263d96a786ef8206e32f40955c17926f97723f5d59c534e1d0ea6283b namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.415026660Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429858739Z" level=info msg="ignoring event" container=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.429971840Z" level=info msg="ignoring event" container=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.430056440Z" level=info msg="ignoring event" container=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430200841Z" level=info msg="shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430412942Z" level=warning msg="cleaning up after shim disconnected" id=0be1c9fcc08f299197ce7daf4e4178234dc9f1f65649893907dc9c539b9dbe83 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.430689343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.442089105Z" level=info msg="shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.442229705Z" level=info msg="ignoring event" container=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447102231Z" level=warning msg="cleaning up after shim disconnected" id=bedf1cdfe8b444c4392fbac24bbab4ed38ed25b275a11a44f2c2768f51239f4a namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447257932Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447025231Z" level=info msg="shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459716799Z" level=warning msg="cleaning up after shim disconnected" id=2e01279176ae8b0a973b9fac8e94cd6370666f4a8fa5a75e87106b97823b6a2d namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.459728099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.447081731Z" level=info msg="shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462555614Z" level=warning msg="cleaning up after shim disconnected" id=177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.462567814Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4359]: time="2024-07-31T21:58:43.476556089Z" level=info msg="ignoring event" container=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.476821491Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479748706Z" level=info msg="shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479845307Z" level=warning msg="cleaning up after shim disconnected" id=bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6 namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.479857707Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:43 functional-457100 dockerd[4365]: time="2024-07-31T21:58:43.574091512Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4359]: time="2024-07-31T21:58:48.222487529Z" level=info msg="ignoring event" container=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222702130Z" level=info msg="shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222762830Z" level=warning msg="cleaning up after shim disconnected" id=0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935 namespace=moby
	Jul 31 21:58:48 functional-457100 dockerd[4365]: time="2024-07-31T21:58:48.222834130Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.253468471Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.289266541Z" level=info msg="ignoring event" container=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291442308Z" level=info msg="shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.291783303Z" level=warning msg="cleaning up after shim disconnected" id=483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4365]: time="2024-07-31T21:58:53.292023000Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360036592Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360455786Z" level=info msg="Daemon shutdown complete"
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360615283Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 21:58:53 functional-457100 dockerd[4359]: time="2024-07-31T21:58:53.360656983Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 21:58:54 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 21:58:54 functional-457100 systemd[1]: docker.service: Consumed 9.040s CPU time.
	Jul 31 21:58:54 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 21:58:54 functional-457100 dockerd[8563]: time="2024-07-31T21:58:54.414155271Z" level=info msg="Starting up"
	Jul 31 21:59:54 functional-457100 dockerd[8563]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 31 21:59:54 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 21:59:54 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0731 21:59:54.516331    5528 out.go:239] * 
	W0731 21:59:54.518709    5528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:59:54.523834    5528 out.go:177] 
	
	
	==> Docker <==
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935'"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:02:55 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:02:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:02:55 functional-457100 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 31 22:02:55 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 22:02:55 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:02:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +14.376143] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.211673] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.417454] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 22:03:55 up 13 min,  0 users,  load average: 0.01, 0.16, 0.15
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:03:49 functional-457100 kubelet[5478]: I0731 22:03:49.085621    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:51 functional-457100 kubelet[5478]: E0731 22:03:51.442024    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m8.866019486s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.754182    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?resourceVersion=0&timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.754692    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.755853    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.756851    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.757932    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:03:54 functional-457100 kubelet[5478]: E0731 22:03:54.758026    5478 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.089050    5478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused" interval="7s"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.594340    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.594434    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.594509    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.596876    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.596947    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.596974    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.597060    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.597987    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.598017    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: I0731 22:03:55.598078    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.597930    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.598239    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.598433    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.598514    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.599025    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:03:55 functional-457100 kubelet[5478]: E0731 22:03:55.599289    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	

-- /stdout --
** stderr ** 
	W0731 22:02:20.848746    6364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:02:55.232440    6364 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.262241    6364 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.292215    6364 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.321206    6364 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.351305    6364 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.383059    6364 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.415599    6364 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:02:55.452323    6364 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (11.3726787s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0731 22:03:56.291913    4588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (120.34s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-457100 apply -f testdata\invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-457100 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2228911s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.17.30.24:8441/openapi/v2?timeout=32s": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2323: kubectl --context functional-457100 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (1.71s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config unset cpus
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config unset cpus" to be -""- but got *"W0731 22:10:01.657281    4336 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 config get cpus: exit status 14 (274.4289ms)

** stderr ** 
	W0731 22:10:01.979648    7244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0731 22:10:01.979648    7244 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config set cpus 2
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0731 22:10:02.249450   12808 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config get cpus
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config get cpus" to be -""- but got *"W0731 22:10:02.523539    4324 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config unset cpus
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config unset cpus" to be -""- but got *"W0731 22:10:02.810740      32 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 config get cpus: exit status 14 (288.8189ms)

                                                
                                                
** stderr ** 
	W0731 22:10:03.094669    3344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1210: expected config error for "out/minikube-windows-amd64.exe -p functional-457100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0731 22:10:03.094669    3344 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.71s)
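Note that each comparison above fails only because the klog-style Docker CLI context warning is prepended to stderr; the substantive output (the config error, or the empty string) is otherwise what the test expects. A minimal Go sketch of filtering such warning lines out of captured output before comparing it; the stripKlogWarnings helper is hypothetical and not part of the minikube test suite:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// klogWarning matches klog-style warning lines such as
// "W0731 22:10:01.979648    7244 main.go:291] ...".
var klogWarning = regexp.MustCompile(`^W\d{4} `)

// stripKlogWarnings drops warning lines from captured command output so
// that only the substantive text (e.g. the config error) is compared.
func stripKlogWarnings(out string) string {
	var kept []string
	for _, line := range strings.Split(out, "\n") {
		if klogWarning.MatchString(strings.TrimSpace(line)) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimSpace(strings.Join(kept, "\n"))
}

func main() {
	stderr := "W0731 22:10:01.979648    7244 main.go:291] Unable to resolve the current Docker CLI context \"default\"\nError: specified key could not be found in config"
	fmt.Println(stripKlogWarnings(stderr)) // prints: Error: specified key could not be found in config
}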

                                                
                                    
TestFunctional/parallel/StatusCmd (309.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 status: exit status 2 (14.2914749s)

                                                
                                                
-- stdout --
	functional-457100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:01.660555    8888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:856: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-457100 status" : exit status 2
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (12.8098016s)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:15.922814    3344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-457100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 status -o json: exit status 2 (12.5671607s)

                                                
                                                
-- stdout --
	{"Name":"functional-457100","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:28.733960    7244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-457100 status -o json" : exit status 2
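For reference, the JSON shown in the stdout above can be decoded with a small struct whose fields mirror its keys; a minimal sketch (the Status type here is assumed for illustration, not taken from the minikube sources):

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the keys seen in the `minikube status -o json` output above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-457100","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// The non-zero exit reported above is consistent with the API server
	// being the one component that is not Running.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
}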
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (12.2062781s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:41.304407    2520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (4m5.3284344s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh sudo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-457100 cache reload                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| ssh     | functional-457100 ssh                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-457100 kubectl --                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	|         | --context functional-457100                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:57 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config set                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus 2                                                                   |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | -o json                                                                  |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --namespace=default --https                                              |                   |                   |         |                     |                     |
	|         | --url hello-node                                                         |                   |                   |         |                     |                     |
	| service | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | service hello-node --url                                                 |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh echo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | hello                                                                    |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | hello-node --url                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh cat                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | /etc/hostname                                                            |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:10:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:13:58 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID '251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:13:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:13:58 functional-457100 cri-dockerd[4630]: W0731 22:13:58.110870    4630 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:13:58Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +14.376143] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.211673] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.417454] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 22:14:58 up 24 min,  0 users,  load average: 0.06, 0.03, 0.07
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:14:48 functional-457100 kubelet[5478]: E0731 22:14:48.712951    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:14:48 functional-457100 kubelet[5478]: E0731 22:14:48.713872    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:14:48 functional-457100 kubelet[5478]: E0731 22:14:48.715144    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:14:48 functional-457100 kubelet[5478]: E0731 22:14:48.715238    5478 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 31 22:14:49 functional-457100 kubelet[5478]: I0731 22:14:49.084708    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:14:51 functional-457100 kubelet[5478]: E0731 22:14:51.561173    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 16m8.985147083s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:14:53 functional-457100 kubelet[5478]: E0731 22:14:53.309348    5478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused" interval="7s"
	Jul 31 22:14:55 functional-457100 kubelet[5478]: E0731 22:14:55.056322    5478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.17.30.24:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-457100.17e76b173e9f6555  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-457100,UID:59d82ef16a559d1bfc9b28786e5577d7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-457100,},FirstTimestamp:2024-07-31 21:58:50.071557461 +0000 UTC m=+231.196673772,LastTimestamp:2024-07-31 21:58:50.071557461 +0000 UTC m=+231.196673772
,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-457100,}"
	Jul 31 22:14:56 functional-457100 kubelet[5478]: E0731 22:14:56.561746    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 16m13.985737515s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.341081    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.341498    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342022    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342056    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342132    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342259    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342345    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342378    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342430    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342547    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342697    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.342723    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: I0731 22:14:58.343001    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.345397    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.345844    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:14:58 functional-457100 kubelet[5478]: E0731 22:14:58.347760    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:53.509121    4488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:11:57.602507    4488 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:11:57.635525    4488 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:11:57.695626    4488 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:12:57.805436    4488 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:12:57.849936    4488 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:12:57.910696    4488 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:12:57.944343    4488 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:13:58.089939    4488 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (12.0747168s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:14:58.860898    2396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (309.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (291.38s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-457100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-457100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1851478s)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://172.17.30.24:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-457100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
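Every kubectl call in this test fails the same way: the API server endpoint 172.17.30.24:8441 refuses connections. A plain TCP dial confirms the same thing independently of kubectl; a minimal diagnostic sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address comes from the errors above; adjust for other clusters.
	addr := "172.17.30.24:8441"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Matches the "No connection could be made because the target
		// machine actively refused it" errors in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver reachable at", addr)
}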
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-457100 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-457100 describe po hello-node-connect: exit status 1 (2.1961092s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1604: "kubectl --context functional-457100 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-457100 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-457100 logs -l app=hello-node-connect: exit status 1 (2.1933362s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1610: "kubectl --context functional-457100 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-457100 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-457100 describe svc hello-node-connect: exit status 1 (2.1849054s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1616: "kubectl --context functional-457100 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.5540528s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:17:30.499038    3700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
E0731 22:17:53.121664   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (4m18.8625633s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| service    | functional-457100 service                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | hello-node --url                                                      |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh cat                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|            | /etc/hostname                                                         |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| license    |                                                                       | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	| ssh        | functional-457100 ssh sudo                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|            | systemctl is-active crio                                              |                   |                   |         |                     |                     |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:12 UTC |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:12 UTC | 31 Jul 24 22:13 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:13 UTC | 31 Jul 24 22:14 UTC |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:14 UTC | 31 Jul 24 22:15 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| dashboard  | --url --port 36195                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC |                     |
	|            | -p functional-457100                                                  |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/12332.pem                                              |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /usr/share/ca-certificates/12332.pem                                  |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/51391683.0                                             |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/123322.pem                                             |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:16 UTC |
	|            | /usr/share/ca-certificates/123322.pem                                 |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:16 UTC |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC | 31 Jul 24 22:16 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                             |                   |                   |         |                     |                     |
	| docker-env | functional-457100 docker-env                                          | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC |                     |
	| image      | functional-457100 image save kicbase/echo-server:functional-457100    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	|            | /etc/test/nested/copy/12332/hosts                                     |                   |                   |         |                     |                     |
	| addons     | functional-457100 addons list                                         | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	| addons     | functional-457100 addons list                                         | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	|            | -o json                                                               |                   |                   |         |                     |                     |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:10:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jul 31 22:20:59 functional-457100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7'"
	Jul 31 22:20:59 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 22:20:59 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '251a8872b9d7cca315362d30a51d92fbf78c642fc1b36ae8be2149b4cd986048'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID 'd1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1049ec04e6b06b6274eba1bc86a0315753e8b127a13cc6dfd57c2ef80330c3a'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80'"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:20:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:21:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 22:16] systemd-fstab-generator[13880]: Ignoring "noauto" option for root device
	[ +38.866981] systemd-fstab-generator[14030]: Ignoring "noauto" option for root device
	[  +0.152158] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 22:17] hrtimer: interrupt took 2259229 ns
	[Jul31 22:20] systemd-fstab-generator[15275]: Ignoring "noauto" option for root device
	[  +0.181933] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 22:22] systemd-fstab-generator[15673]: Ignoring "noauto" option for root device
	[  +0.133823] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:22:00 up 31 min,  0 users,  load average: 0.30, 0.14, 0.09
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:21:57 functional-457100 kubelet[5478]: E0731 22:21:57.545941    5478 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-457100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:21:57 functional-457100 kubelet[5478]: E0731 22:21:57.546073    5478 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 31 22:21:59 functional-457100 kubelet[5478]: I0731 22:21:59.087215    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:21:59 functional-457100 kubelet[5478]: E0731 22:21:59.144131    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:21:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:21:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:21:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:21:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.382141    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.389704    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390002    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390225    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390300    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390328    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390407    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: I0731 22:22:00.390466    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390602    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.390675    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.392018    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.392184    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.392251    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.393937    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.394084    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.395347    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:22:00 functional-457100 kubelet[5478]: E0731 22:22:00.452760    5478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:17:42.039816   10852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:17:59.291740   10852 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.414904   10852 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.477770   10852 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.509599   10852 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.648222   10852 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.686908   10852 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.740372   10852 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:20:59.880872   10852 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (12.1392209s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:22:00.921446   11404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (291.38s)
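The Docker journal and kubelet entries in the post-mortem above all point to the same root cause: the Docker daemon inside the functional-457100 guest failed to start ("Failed to start Docker Application Container Engine"), so cri-dockerd, the kubelet, and the API server on port 8441 were all unreachable. As a minimal sketch of how this could be double-checked by hand against the same profile (standard minikube and systemd commands, not taken from this report, and assuming the VM is still running):

	# Check the Docker unit inside the guest for the same profile
	out/minikube-windows-amd64.exe -p functional-457100 ssh -- sudo systemctl status docker --no-pager
	# Pull the last lines of the Docker journal to see why the daemon exited
	out/minikube-windows-amd64.exe -p functional-457100 ssh -- sudo journalctl -u docker --no-pager -n 50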

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (418.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
E0731 22:12:53.118365   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.17.30.24:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (11.9637296s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:14:13.440493   11668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.8996102s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:14:25.403441    8220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (2m22.2768412s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:57 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config set                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus 2                                                                   |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | -o json                                                                  |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --namespace=default --https                                              |                   |                   |         |                     |                     |
	|         | --url hello-node                                                         |                   |                   |         |                     |                     |
	| service | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | service hello-node --url                                                 |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh echo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | hello                                                                    |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | hello-node --url                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh cat                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | /etc/hostname                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| license |                                                                          | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	| ssh     | functional-457100 ssh sudo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|         | systemctl is-active crio                                                 |                   |                   |         |                     |                     |
	| image   | functional-457100 image load --daemon                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	|         | kicbase/echo-server:functional-457100                                    |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| image   | functional-457100 image ls                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:12 UTC |
	| image   | functional-457100 image load --daemon                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:12 UTC | 31 Jul 24 22:13 UTC |
	|         | kicbase/echo-server:functional-457100                                    |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| image   | functional-457100 image ls                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:13 UTC |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:10:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:15:58 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:15:58 functional-457100 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jul 31 22:15:58 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 22:15:58 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:16:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 22:16] systemd-fstab-generator[13880]: Ignoring "noauto" option for root device
	[ +38.866981] systemd-fstab-generator[14030]: Ignoring "noauto" option for root device
	[  +0.152158] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:16:59 up 26 min,  0 users,  load average: 0.01, 0.02, 0.05
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:16:56 functional-457100 kubelet[5478]: E0731 22:16:56.274148    5478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.17.30.24:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-457100.17e76b1745d26196  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-457100,UID:38f507eb43fb4e3716aa01cd3d32cec7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-457100,},FirstTimestamp:2024-07-31 21:58:50.19233935 +0000 UTC m=+231.317455561,LastTimestamp:2024-07-31 21:58:50.19233935 +0000 UTC m=+231.317455561,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-457100,}"
	Jul 31 22:16:56 functional-457100 kubelet[5478]: E0731 22:16:56.585266    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m14.009259134s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962164    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962300    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: I0731 22:16:58.962565    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962747    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962783    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962881    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963370    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963418    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963442    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963458    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963517    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.967301    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.968106    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.971300    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.971612    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.973963    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: I0731 22:16:59.085617    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: E0731 22:16:59.154856    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:16:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: E0731 22:16:59.349777    5478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused" interval="7s"
	

-- /stdout --
** stderr ** 
	W0731 22:14:37.303159   12432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:14:58.324738   12432 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:14:58.378103   12432 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:14:58.418947   12432 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:14:58.480039   12432 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.630743   12432 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.694759   12432 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.745801   12432 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.780765   12432 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (12.5786038s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0731 22:16:59.611139   12140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (418.75s)

TestFunctional/parallel/MySQL (239.25s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-457100 replace --force -f testdata\mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-457100 replace --force -f testdata\mysql.yaml: exit status 1 (4.2486987s)

** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.17.30.24:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.17.30.24:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-457100 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.9094047s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0731 22:17:17.165040    8016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (3m31.4264481s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| service    | functional-457100 service                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | hello-node --url                                                      |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh cat                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|            | /etc/hostname                                                         |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| tunnel     | functional-457100 tunnel                                              | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| license    |                                                                       | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	| ssh        | functional-457100 ssh sudo                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|            | systemctl is-active crio                                              |                   |                   |         |                     |                     |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:12 UTC |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:12 UTC | 31 Jul 24 22:13 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:13 UTC | 31 Jul 24 22:14 UTC |
	| image      | functional-457100 image load --daemon                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:14 UTC | 31 Jul 24 22:15 UTC |
	|            | kicbase/echo-server:functional-457100                                 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| dashboard  | --url --port 36195                                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC |                     |
	|            | -p functional-457100                                                  |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/12332.pem                                              |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /usr/share/ca-certificates/12332.pem                                  |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/51391683.0                                             |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:15 UTC |
	|            | /etc/ssl/certs/123322.pem                                             |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:16 UTC |
	|            | /usr/share/ca-certificates/123322.pem                                 |                   |                   |         |                     |                     |
	| image      | functional-457100 image ls                                            | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:15 UTC | 31 Jul 24 22:16 UTC |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC | 31 Jul 24 22:16 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                             |                   |                   |         |                     |                     |
	| docker-env | functional-457100 docker-env                                          | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC |                     |
	| image      | functional-457100 image save kicbase/echo-server:functional-457100    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:16 UTC |                     |
	|            | C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                     |                   |                   |         |                     |                     |
	| ssh        | functional-457100 ssh sudo cat                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	|            | /etc/test/nested/copy/12332/hosts                                     |                   |                   |         |                     |                     |
	| addons     | functional-457100 addons list                                         | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	| addons     | functional-457100 addons list                                         | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:17 UTC | 31 Jul 24 22:17 UTC |
	|            | -o json                                                               |                   |                   |         |                     |                     |
	|------------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:10:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4516e9ce4adce2ff0ba3f6356f04525f2214374be912bcbdaac255363677a5c7'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '181a7bb8b9a5ccd6a72a7b24afdbfb16fc678517fd984708f316db0b5574bf80'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="error getting RW layer size for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:19:59 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:19:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:19:59 functional-457100 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 31 22:19:59 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 22:19:59 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:20:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 22:16] systemd-fstab-generator[13880]: Ignoring "noauto" option for root device
	[ +38.866981] systemd-fstab-generator[14030]: Ignoring "noauto" option for root device
	[  +0.152158] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 22:17] hrtimer: interrupt took 2259229 ns
	[Jul31 22:20] systemd-fstab-generator[15275]: Ignoring "noauto" option for root device
	[  +0.181933] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:21:00 up 30 min,  0 users,  load average: 0.57, 0.14, 0.08
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:20:59 functional-457100 kubelet[5478]: I0731 22:20:59.085560    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.142537    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:20:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:20:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:20:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:20:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.901999    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.902201    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: I0731 22:20:59.902238    5478 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.904446    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.904532    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.904561    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.904642    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: I0731 22:20:59.904658    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.904741    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.905570    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.906044    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.906190    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.906310    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.906396    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.907582    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.907886    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.908277    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.908635    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:20:59 functional-457100 kubelet[5478]: E0731 22:20:59.908910    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

-- /stdout --
** stderr ** 
	W0731 22:17:29.070645    4408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:17:59.283849    4408 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.401480    4408 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.446136    4408 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.494603    4408 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:18:59.532701    4408 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.650546    4408 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.686412    4408 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:19:59.737496    4408 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (11.6377836s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0731 22:21:00.517831    5180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (239.25s)

TestFunctional/parallel/NodeLabels (181.64s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-457100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-457100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.1813334s)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-457100 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-457100 -n functional-457100: exit status 2 (11.8613667s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0731 22:14:12.722144    8796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs -n 25: (2m35.007439s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:57 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config set                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus 2                                                                   |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config unset                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| config  | functional-457100 config get                                             | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | cpus                                                                     |                   |                   |         |                     |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	| start   | -p functional-457100                                                     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --dry-run --memory                                                       |                   |                   |         |                     |                     |
	|         | 250MB --alsologtostderr                                                  |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| service | functional-457100 service list                                           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | -o json                                                                  |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --namespace=default --https                                              |                   |                   |         |                     |                     |
	|         | --url hello-node                                                         |                   |                   |         |                     |                     |
	| service | functional-457100                                                        | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | service hello-node --url                                                 |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh echo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | hello                                                                    |                   |                   |         |                     |                     |
	| service | functional-457100 service                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | hello-node --url                                                         |                   |                   |         |                     |                     |
	| ssh     | functional-457100 ssh cat                                                | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC | 31 Jul 24 22:10 UTC |
	|         | /etc/hostname                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:10 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| tunnel  | functional-457100 tunnel                                                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| license |                                                                          | minikube          | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	| ssh     | functional-457100 ssh sudo                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC |                     |
	|         | systemctl is-active crio                                                 |                   |                   |         |                     |                     |
	| image   | functional-457100 image load --daemon                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:11 UTC |
	|         | kicbase/echo-server:functional-457100                                    |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| image   | functional-457100 image ls                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:11 UTC | 31 Jul 24 22:12 UTC |
	| image   | functional-457100 image load --daemon                                    | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:12 UTC | 31 Jul 24 22:13 UTC |
	|         | kicbase/echo-server:functional-457100                                    |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                        |                   |                   |         |                     |                     |
	| image   | functional-457100 image ls                                               | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:13 UTC |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:10:08
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ca2408f549496f3ad297fc74dac2ad454d434c33b9752ddeca335a6d61454792'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9fd1c3e9cc892d8900eb5c0134164c13041e26fb675cc49ea80f94d3a435ccf1'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df40e581f804b83a97c9f1367f069a66625d207362cbbf6d2839c03d1d1fbbe5'"
	Jul 31 22:15:58 functional-457100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '483090e067cd7364e32f857ace1297771f5d0ade8b89115c0747419d7081a6fc'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5138db35a08931a1ee38b815b88feb228156d0311e11be3ed102ef7743579d06'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd876a547bfa7ac774850d7a4de640e550c4233f4156cf7114bb2894882c48c24'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9cc28c900527eefc76968db06d5e2c78522404ee70f1ad3699e2356e93b25824'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bf84eae8f955a70f42ac62ea04163987420b596007f66a9c32782690e95017c6'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '177da3b0c28ede90de554568e6e9e39fd73fe1a4a390052166daea4b95706705'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error getting RW layer size for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0903f5535e8c2beb5a423646527147c39d92d3f6110b71d7c7750a8999f07935'"
	Jul 31 22:15:58 functional-457100 cri-dockerd[4630]: time="2024-07-31T22:15:58Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 31 22:15:58 functional-457100 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jul 31 22:15:58 functional-457100 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 22:15:58 functional-457100 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-31T22:16:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 21:53] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 21:54] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.619127] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.228265] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.262951] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +5.286858] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.030492] systemd-fstab-generator[4579]: Ignoring "noauto" option for root device
	[  +0.188751] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +0.206776] systemd-fstab-generator[4603]: Ignoring "noauto" option for root device
	[  +0.260917] systemd-fstab-generator[4618]: Ignoring "noauto" option for root device
	[  +0.888960] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.868668] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.066299] systemd-fstab-generator[5471]: Ignoring "noauto" option for root device
	[Jul31 21:55] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.064901] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.363300] systemd-fstab-generator[6438]: Ignoring "noauto" option for root device
	[Jul31 21:58] systemd-fstab-generator[8096]: Ignoring "noauto" option for root device
	[  +0.177086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.477885] systemd-fstab-generator[8132]: Ignoring "noauto" option for root device
	[  +0.260145] systemd-fstab-generator[8145]: Ignoring "noauto" option for root device
	[  +0.286174] systemd-fstab-generator[8159]: Ignoring "noauto" option for root device
	[  +5.318283] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 22:16] systemd-fstab-generator[13880]: Ignoring "noauto" option for root device
	[ +38.866981] systemd-fstab-generator[14030]: Ignoring "noauto" option for root device
	[  +0.152158] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:16:59 up 26 min,  0 users,  load average: 0.01, 0.02, 0.05
	Linux functional-457100 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 22:16:56 functional-457100 kubelet[5478]: E0731 22:16:56.274148    5478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.17.30.24:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-457100.17e76b1745d26196  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-457100,UID:38f507eb43fb4e3716aa01cd3d32cec7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-457100,},FirstTimestamp:2024-07-31 21:58:50.19233935 +0000 UTC m=+231.317455561,LastTimestamp:2024-07-31 21:58:50.19233935 +0000 UTC m=+231.317455561,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-457100,}"
	Jul 31 22:16:56 functional-457100 kubelet[5478]: E0731 22:16:56.585266    5478 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m14.009259134s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962164    5478 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962300    5478 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: I0731 22:16:58.962565    5478 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962747    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962783    5478 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.962881    5478 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963370    5478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963418    5478 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963442    5478 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963458    5478 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.963517    5478 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.967301    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.968106    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.971300    5478 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.971612    5478 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 31 22:16:58 functional-457100 kubelet[5478]: E0731 22:16:58.973963    5478 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: I0731 22:16:59.085617    5478 status_manager.go:853] "Failed to get status for pod" podUID="b1f21da6d6d77b6662df523b7b4dbe14" pod="kube-system/kube-apiserver-functional-457100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-457100\": dial tcp 172.17.30.24:8441: connect: connection refused"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: E0731 22:16:59.154856    5478 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:16:59 functional-457100 kubelet[5478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:16:59 functional-457100 kubelet[5478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:16:59 functional-457100 kubelet[5478]: E0731 22:16:59.349777    5478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-457100?timeout=10s\": dial tcp 172.17.30.24:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:14:24.586775    6548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 22:14:58.323135    6548 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:14:58.385047    6548 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:14:58.440030    6548 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.623709    6548 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.671037    6548 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.715776    6548 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.754777    6548 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0731 22:15:58.798781    6548 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-457100 -n functional-457100: exit status 2 (12.5010364s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:16:59.695104   13076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-457100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (181.64s)
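Note on the failure mode above: the template error is a downstream symptom of the apiserver refusing connections on 172.17.30.24:8441. The node query returns an empty List, so `(index .items 0)` has nothing to index. The following is a minimal, hypothetical Go sketch (not part of the test suite) that feeds the same template the same empty-List shape shown under "raw data was:" and fails the same way; the exact error wording depends on the Go version.

	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// The same template the test passes to kubectl's go-template printer.
		tmpl := template.Must(template.New("output").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))

		// The empty List the apiserver returned while refusing connections
		// (compare "raw data was:" in the failure output above).
		data := map[string]any{
			"apiVersion": "v1",
			"items":      []any{},
			"kind":       "List",
			"metadata":   map[string]any{"resourceVersion": ""},
		}

		// Execution fails because `index .items 0` is out of range on an
		// empty slice; the message wording varies with the Go version.
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}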

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-457100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-457100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.2137649s)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://172.17.30.24:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-457100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 service list: exit status 103 (8.5823277s)

                                                
                                                
-- stdout --
	* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:03.888729   12268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1461: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-457100 service list" : exit status 103
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-457100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-457100\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (8.58s)
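Side note on the warning that opens every stderr block above: the Docker CLI keeps context metadata under %USERPROFILE%\.docker\contexts\meta\<digest>\meta.json, and the directory name in the warning appears to be the SHA-256 digest of the context name "default" (an assumption, not stated in the log). The warning only means no "default" context file exists on the Jenkins host; it is unrelated to the apiserver being stopped. A small Go sketch to check that assumption:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Assumption: the directory name in the warning is the SHA-256 digest
		// of the Docker context name "default". Print it so it can be compared
		// against 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f.
		sum := sha256.Sum256([]byte("default"))
		fmt.Printf("%x\n", sum)
	}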

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (7.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 service list -o json: exit status 103 (7.8019272s)

                                                
                                                
-- stdout --
	* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:12.452919   12244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1491: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-457100 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (7.80s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (7.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 service --namespace=default --https --url hello-node: exit status 103 (7.6461667s)

                                                
                                                
-- stdout --
	* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:20.262379   10052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-457100 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (7.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (7.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url --format={{.IP}}: exit status 103 (7.6300746s)

                                                
                                                
-- stdout --
	* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:27.899533    7312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1548: "* The control-plane node functional-457100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-457100\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (7.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (7.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url: exit status 103 (7.4972204s)

                                                
                                                
-- stdout --
	* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:35.527506   12868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-457100 service hello-node --url": exit status 103
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-457100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-457100"
functional_test.go:1569: failed to parse "* The control-plane node functional-457100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-457100\"": parse "* The control-plane node functional-457100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-457100\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (7.50s)
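The parse error at functional_test.go:1569 comes straight from the standard library: the two-line minikube message contains a newline, which net/url treats as an invalid control character. A minimal, hypothetical Go sketch mirroring what the test does with that output:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// What the test got back instead of a URL: minikube's two-line
		// "apiserver is not running" message. The embedded newline is an
		// ASCII control character, which url.Parse rejects.
		notAURL := "* The control-plane node functional-457100 apiserver is not running: (state=Stopped)\n" +
			"  To start a cluster, run: \"minikube start -p functional-457100\""

		if _, err := url.Parse(notAURL); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
	}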

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: W0731 22:10:54.375120    8516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:10:54.475443    8516 out.go:291] Setting OutFile to fd 1324 ...
I0731 22:10:54.487920    8516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:10:54.487920    8516 out.go:304] Setting ErrFile to fd 1328...
I0731 22:10:54.487920    8516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:10:54.504343    8516 mustload.go:65] Loading cluster: functional-457100
I0731 22:10:54.505465    8516 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:10:54.506351    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:10:57.105517    8516 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0731 22:10:57.105623    8516 main.go:141] libmachine: [stderr =====>] : 
I0731 22:10:57.105623    8516 host.go:66] Checking if "functional-457100" exists ...
I0731 22:10:57.106421    8516 api_server.go:166] Checking apiserver status ...
I0731 22:10:57.122434    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0731 22:10:57.122434    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:10:59.501842    8516 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0731 22:10:59.502040    8516 main.go:141] libmachine: [stderr =====>] : 
I0731 22:10:59.502108    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:11:02.238232    8516 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

                                                
                                                
I0731 22:11:02.238982    8516 main.go:141] libmachine: [stderr =====>] : 
I0731 22:11:02.238982    8516 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:11:02.347372    8516 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.2248722s)
W0731 22:11:02.347372    8516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0731 22:11:02.351992    8516 out.go:177] * The control-plane node functional-457100 apiserver is not running: (state=Stopped)
I0731 22:11:02.354224    8516 out.go:177]   To start a cluster, run: "minikube start -p functional-457100"

                                                
                                                
stdout: * The control-plane node functional-457100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-457100"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7188: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] stdout:
* The control-plane node functional-457100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-457100"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-457100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-457100 apply -f testdata\testsvc.yaml: exit status 1 (4.208685s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.17.30.24:8441/openapi/v2?timeout=32s": dial tcp 172.17.30.24:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-457100 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (59.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls --format short --alsologtostderr: (59.9505176s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-457100 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-457100 image ls --format short --alsologtostderr:
W0731 22:23:01.008250    3508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:23:01.122528    3508 out.go:291] Setting OutFile to fd 1368 ...
I0731 22:23:01.123107    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.123107    3508 out.go:304] Setting ErrFile to fd 1392...
I0731 22:23:01.123166    3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.139490    3508 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.139729    3508 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.140786    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:03.647666    3508 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:03.647798    3508 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:03.666714    3508 ssh_runner.go:195] Run: systemctl --version
I0731 22:23:03.666714    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:05.969798    3508 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:05.969798    3508 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:05.970102    3508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:23:08.731541    3508 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

I0731 22:23:08.731541    3508 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:08.732332    3508 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:23:08.834555    3508 ssh_runner.go:235] Completed: systemctl --version: (5.1677758s)
I0731 22:23:08.843513    3508 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0731 22:24:00.776318    3508 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (51.9320852s)
W0731 22:24:00.776318    3508 cache_images.go:734] Failed to list images for profile functional-457100 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (59.95s)
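
Note: the stderr above shows "docker images" inside the guest taking ~52s and then failing with "connection reset by peer" on /var/run/docker.sock, which is why the image list came back empty; the same pattern repeats in the ImageListTable, ImageListJson, ImageListYaml and ImageBuild failures below. A possible way to inspect the guest's Docker daemon directly (a sketch, assuming the profile is still running) is:

    out/minikube-windows-amd64.exe ssh -p functional-457100 -- sudo systemctl status docker
    out/minikube-windows-amd64.exe ssh -p functional-457100 -- sudo journalctl -u docker -n 50 --no-pager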

TestFunctional/parallel/ImageCommands/ImageListTable (60.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls --format table --alsologtostderr: (1m0.2133711s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-457100 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-457100 image ls --format table --alsologtostderr:
W0731 22:24:00.931823   13120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:24:01.022815   13120 out.go:291] Setting OutFile to fd 1072 ...
I0731 22:24:01.023828   13120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:24:01.023828   13120 out.go:304] Setting ErrFile to fd 1396...
I0731 22:24:01.023828   13120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:24:01.037828   13120 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:24:01.037828   13120 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:24:01.037828   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:24:03.400326   13120 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:24:03.400479   13120 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:03.413421   13120 ssh_runner.go:195] Run: systemctl --version
I0731 22:24:03.413538   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:24:05.718650   13120 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:24:05.718732   13120 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:05.718808   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:24:08.405343   13120 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

I0731 22:24:08.405454   13120 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:08.406010   13120 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:24:08.508752   13120 ssh_runner.go:235] Completed: systemctl --version: (5.0951685s)
I0731 22:24:08.519009   13120 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0731 22:25:00.982680   13120 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.4628553s)
W0731 22:25:00.982777   13120 cache_images.go:734] Failed to list images for profile functional-457100 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (59.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls --format json --alsologtostderr: (59.9262504s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-457100 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-457100 image ls --format json --alsologtostderr:
W0731 22:23:01.009211   10076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:23:01.123754   10076 out.go:291] Setting OutFile to fd 1428 ...
I0731 22:23:01.146025   10076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.146048   10076 out.go:304] Setting ErrFile to fd 1376...
I0731 22:23:01.146097   10076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.164659   10076 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.164659   10076 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.165830   10076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:03.647666   10076 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:03.647666   10076 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:03.666714   10076 ssh_runner.go:195] Run: systemctl --version
I0731 22:23:03.666714   10076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:05.970569   10076 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:05.970610   10076 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:05.970677   10076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:23:08.755160   10076 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

I0731 22:23:08.755869   10076 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:08.756332   10076 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:23:08.849492   10076 ssh_runner.go:235] Completed: systemctl --version: (5.1827126s)
I0731 22:23:08.857806   10076 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0731 22:24:00.775538   10076 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (51.9170743s)
W0731 22:24:00.775704   10076 cache_images.go:734] Failed to list images for profile functional-457100 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (59.93s)

TestFunctional/parallel/ImageCommands/ImageListYaml (59.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls --format yaml --alsologtostderr: (59.9935177s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-457100 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-457100 image ls --format yaml --alsologtostderr:
W0731 22:23:01.011199    9764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:23:01.123541    9764 out.go:291] Setting OutFile to fd 1276 ...
I0731 22:23:01.146048    9764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.146097    9764 out.go:304] Setting ErrFile to fd 1072...
I0731 22:23:01.146097    9764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:23:01.176482    9764 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.178337    9764 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:23:01.179824    9764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:03.648001    9764 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:03.648001    9764 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:03.666714    9764 ssh_runner.go:195] Run: systemctl --version
I0731 22:23:03.666714    9764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:23:05.969798    9764 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:23:05.970139    9764 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:05.970293    9764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:23:08.778323    9764 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

I0731 22:23:08.778917    9764 main.go:141] libmachine: [stderr =====>] : 
I0731 22:23:08.779524    9764 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:23:08.879463    9764 ssh_runner.go:235] Completed: systemctl --version: (5.2126828s)
I0731 22:23:08.887781    9764 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0731 22:24:00.783285    9764 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (51.8947605s)
W0731 22:24:00.783512    9764 cache_images.go:734] Failed to list images for profile functional-457100 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (59.99s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 ssh pgrep buildkitd: exit status 1 (9.8908039s)

** stderr ** 
	W0731 22:24:00.946814    9584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image build -t localhost/my-image:functional-457100 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image build -t localhost/my-image:functional-457100 testdata\build --alsologtostderr: (50.302211s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-457100 image build -t localhost/my-image:functional-457100 testdata\build --alsologtostderr:
W0731 22:24:10.829998   10180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0731 22:24:10.913941   10180 out.go:291] Setting OutFile to fd 1404 ...
I0731 22:24:10.933267   10180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:24:10.933267   10180 out.go:304] Setting ErrFile to fd 648...
I0731 22:24:10.933267   10180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:24:10.951939   10180 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:24:10.969761   10180 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 22:24:10.971056   10180 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:24:13.184504   10180 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:24:13.184565   10180 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:13.196589   10180 ssh_runner.go:195] Run: systemctl --version
I0731 22:24:13.197150   10180 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-457100 ).state
I0731 22:24:15.344619   10180 main.go:141] libmachine: [stdout =====>] : Running

I0731 22:24:15.344619   10180 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:15.344619   10180 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-457100 ).networkadapters[0]).ipaddresses[0]
I0731 22:24:17.914740   10180 main.go:141] libmachine: [stdout =====>] : 172.17.30.24

I0731 22:24:17.914740   10180 main.go:141] libmachine: [stderr =====>] : 
I0731 22:24:17.915289   10180 sshutil.go:53] new ssh client: &{IP:172.17.30.24 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-457100\id_rsa Username:docker}
I0731 22:24:18.020194   10180 ssh_runner.go:235] Completed: systemctl --version: (4.8229831s)
I0731 22:24:18.020194   10180 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3459270176.tar
I0731 22:24:18.031581   10180 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 22:24:18.069462   10180 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3459270176.tar
I0731 22:24:18.075460   10180 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3459270176.tar: stat -c "%s %y" /var/lib/minikube/build/build.3459270176.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3459270176.tar': No such file or directory
I0731 22:24:18.075460   10180 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3459270176.tar --> /var/lib/minikube/build/build.3459270176.tar (3072 bytes)
I0731 22:24:18.142476   10180 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3459270176
I0731 22:24:18.174629   10180 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3459270176 -xf /var/lib/minikube/build/build.3459270176.tar
I0731 22:24:18.192835   10180 docker.go:360] Building image: /var/lib/minikube/build/build.3459270176
I0731 22:24:18.202426   10180 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-457100 /var/lib/minikube/build/build.3459270176
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0731 22:25:00.983072   10180 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-457100 /var/lib/minikube/build/build.3459270176: (42.7801073s)
W0731 22:25:00.983288   10180 build_images.go:125] Failed to build image for profile functional-457100. make sure the profile is running. Docker build /var/lib/minikube/build/build.3459270176.tar: buildimage docker: docker build -t localhost/my-image:functional-457100 /var/lib/minikube/build/build.3459270176: Process exited with status 1
stdout:

stderr:
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0731 22:25:00.983327   10180 build_images.go:133] succeeded building to: 
I0731 22:25:00.983366   10180 build_images.go:134] failed building to: functional-457100
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls: (1m0.3289059s)
functional_test.go:446: expected "localhost/my-image:functional-457100" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (97.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr: (36.8517367s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls: (1m0.2059033s)
functional_test.go:446: expected "kicbase/echo-server:functional-457100" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (97.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr: (1m0.2864185s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls: (1m0.2029319s)
functional_test.go:446: expected "kicbase/echo-server:functional-457100" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.49s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.0182502s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-457100
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image load --daemon kicbase/echo-server:functional-457100 --alsologtostderr: (59.0914191s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls: (1m0.3060667s)
functional_test.go:446: expected "kicbase/echo-server:functional-457100" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.64s)

TestFunctional/parallel/DockerEnv/powershell (469.3s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-457100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-457100"
functional_test.go:499: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-457100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-457100": exit status 1 (7m49.2965858s)

** stderr ** 
	W0731 22:16:13.378694    1920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_docker-env_e7a87817879750ae3d8d73c11fc2625d0ca04f2f_9.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0731 22:24:00.962929    1920 out.go:190] Fprintf failed: write /dev/stdout: The pipe is being closed.

** /stderr **
functional_test.go:502: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-457100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-457100"
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (469.30s)
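
Note: the docker-env script generation itself failed ("write /dev/stdout: The pipe is being closed") before the status command could run. Re-running the command outside the pipeline (a sketch; the --shell value is an assumption about the shell the test intends to target) can show whether the script is generated at all:

    out/minikube-windows-amd64.exe -p functional-457100 docker-env --shell powershell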

TestFunctional/parallel/ImageCommands/ImageSaveToFile (120.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image save kicbase/echo-server:functional-457100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image save kicbase/echo-server:functional-457100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (2m0.5529561s)
functional_test.go:386: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (120.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: exit status 80 (483.9356ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0731 22:21:00.047078    6616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 22:21:00.139964    6616 out.go:291] Setting OutFile to fd 648 ...
	I0731 22:21:00.158150    6616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:21:00.158783    6616 out.go:304] Setting ErrFile to fd 1376...
	I0731 22:21:00.158837    6616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:21:00.175223    6616 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:21:00.175223    6616 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0731 22:21:00.295610    6616 cache.go:107] acquiring lock: {Name:mkf95425a8915dbfb11d7c7d69d8a47644f0157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:21:00.298994    6616 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 123.7699ms
	I0731 22:21:00.303425    6616 out.go:177] 
	W0731 22:21:00.305659    6616 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0731 22:21:00.305659    6616 out.go:239] * 
	* 
	W0731 22:21:00.385607    6616 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_logs_51e1df8512fd2a6325994a4f10d950145af53c80_482.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_logs_51e1df8512fd2a6325994a4f10d950145af53c80_482.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 22:21:00.388600    6616 out.go:177] 

** /stderr **
functional_test.go:411: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0731 22:21:00.047078    6616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 22:21:00.139964    6616 out.go:291] Setting OutFile to fd 648 ...
	I0731 22:21:00.158150    6616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:21:00.158783    6616 out.go:304] Setting ErrFile to fd 1376...
	I0731 22:21:00.158837    6616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:21:00.175223    6616 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:21:00.175223    6616 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0731 22:21:00.295610    6616 cache.go:107] acquiring lock: {Name:mkf95425a8915dbfb11d7c7d69d8a47644f0157a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:21:00.298994    6616 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 123.7699ms
	I0731 22:21:00.303425    6616 out.go:177] 
	W0731 22:21:00.305659    6616 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0731 22:21:00.305659    6616 out.go:239] * 
	* 
	W0731 22:21:00.385607    6616 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_logs_51e1df8512fd2a6325994a4f10d950145af53c80_482.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_logs_51e1df8512fd2a6325994a4f10d950145af53c80_482.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 22:21:00.388600    6616 out.go:177] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.49s)
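
Note: the preceding ImageSaveToFile failure already reported that echo-server-save.tar was never written, so this load was attempted against a file that most likely does not exist; the stderr additionally shows the Windows path being sanitized into the image cache and then rejected as an image reference. A quick host-side existence check (a sketch, using the path quoted in the log) is:

    Test-Path C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar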

TestMultiControlPlane/serial/PingHostFromPods (69.22s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- sh -c "ping -c 1 172.17.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- sh -c "ping -c 1 172.17.16.1": exit status 1 (10.5378045s)

-- stdout --
	PING 172.17.16.1 (172.17.16.1): 56 data bytes
	
	--- 172.17.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0731 22:40:31.548194   10476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.16.1) from pod (busybox-fc5497c4f-dmsjq): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- sh -c "ping -c 1 172.17.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- sh -c "ping -c 1 172.17.16.1": exit status 1 (10.512809s)

-- stdout --
	PING 172.17.16.1 (172.17.16.1): 56 data bytes
	
	--- 172.17.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0731 22:40:42.630101    7704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.16.1) from pod (busybox-fc5497c4f-f8sql): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- sh -c "ping -c 1 172.17.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- sh -c "ping -c 1 172.17.16.1": exit status 1 (10.5211304s)

-- stdout --
	PING 172.17.16.1 (172.17.16.1): 56 data bytes
	
	--- 172.17.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0731 22:40:53.663093    6624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.16.1) from pod (busybox-fc5497c4f-x7dnz): exit status 1
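
Note: all three busybox pods resolved host.minikube.internal but saw 100% packet loss pinging the host-side gateway 172.17.16.1, so ICMP from the guests to the Windows host is not getting through. A possible host-side check of reachability and the inbound ICMPv4 rules (a sketch; the rule group name assumes the default Windows firewall configuration) is:

    Test-NetConnection -ComputerName 172.17.16.1
    Get-NetFirewallRule -DisplayGroup "File and Printer Sharing" | Where-Object DisplayName -like "*ICMPv4-In*"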
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-207300 -n ha-207300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-207300 -n ha-207300: (12.2658227s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 logs -n 25: (8.8826486s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-457100 ssh pgrep          | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:24 UTC |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-457100 image build -t     | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:24 UTC | 31 Jul 24 22:25 UTC |
	|         | localhost/my-image:functional-457100 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-457100 image ls           | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:25 UTC | 31 Jul 24 22:26 UTC |
	| delete  | -p functional-457100                 | functional-457100 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:27 UTC | 31 Jul 24 22:28 UTC |
	| start   | -p ha-207300 --wait=true             | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:28 UTC | 31 Jul 24 22:39 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- apply -f             | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- rollout status       | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- get pods -o          | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- get pods -o          | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-dmsjq --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-f8sql --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-x7dnz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-dmsjq --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-f8sql --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-x7dnz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-dmsjq -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-f8sql -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-x7dnz -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- get pods -o          | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-dmsjq              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC |                     |
	|         | busybox-fc5497c4f-dmsjq -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.16.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-f8sql              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC |                     |
	|         | busybox-fc5497c4f-f8sql -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.16.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC | 31 Jul 24 22:40 UTC |
	|         | busybox-fc5497c4f-x7dnz              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-207300 -- exec                 | ha-207300         | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:40 UTC |                     |
	|         | busybox-fc5497c4f-x7dnz -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.16.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:28:20
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:28:20.394898    9488 out.go:291] Setting OutFile to fd 1524 ...
	I0731 22:28:20.395358    9488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:28:20.395436    9488 out.go:304] Setting ErrFile to fd 1528...
	I0731 22:28:20.395512    9488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:28:20.416409    9488 out.go:298] Setting JSON to false
	I0731 22:28:20.418993    9488 start.go:129] hostinfo: {"hostname":"minikube6","uptime":540842,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:28:20.418993    9488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:28:20.427763    9488 out.go:177] * [ha-207300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:28:20.438483    9488 notify.go:220] Checking for updates...
	I0731 22:28:20.438871    9488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:28:20.441706    9488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:28:20.444315    9488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:28:20.447232    9488 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:28:20.449251    9488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:28:20.453233    9488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:28:25.468913    9488 out.go:177] * Using the hyperv driver based on user configuration
	I0731 22:28:25.472010    9488 start.go:297] selected driver: hyperv
	I0731 22:28:25.472010    9488 start.go:901] validating driver "hyperv" against <nil>
	I0731 22:28:25.472010    9488 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:28:25.519913    9488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:28:25.520754    9488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:28:25.520754    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:28:25.520754    9488 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 22:28:25.520754    9488 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:28:25.521744    9488 start.go:340] cluster config:
	{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:28:25.521857    9488 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:28:25.527878    9488 out.go:177] * Starting "ha-207300" primary control-plane node in "ha-207300" cluster
	I0731 22:28:25.530614    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:28:25.530614    9488 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 22:28:25.530614    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:28:25.531324    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:28:25.531324    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:28:25.531985    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:28:25.531985    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json: {Name:mk44506ea483dff1f2f73c4d37ad7611d3f92c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:28:25.533281    9488 start.go:360] acquireMachinesLock for ha-207300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:28:25.533281    9488 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-207300"
	I0731 22:28:25.533281    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:28:25.533281    9488 start.go:125] createHost starting for "" (driver="hyperv")
	I0731 22:28:25.537219    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:28:25.538220    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:28:25.538220    9488 client.go:168] LocalClient.Create starting
	I0731 22:28:25.538220    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:28:25.538220    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:28:27.453572    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:28:27.453572    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:27.454245    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:28:29.054801    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:28:29.054801    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:29.054889    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:28:30.482751    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:28:30.483674    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:30.483779    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:28:33.815521    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:28:33.816023    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:33.818487    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:28:34.307608    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:28:34.597133    9488 main.go:141] libmachine: Creating VM...
	I0731 22:28:34.597133    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:28:37.245341    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:28:37.245837    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:37.245837    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:28:37.245997    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:28:38.900949    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:28:38.901177    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:38.901177    9488 main.go:141] libmachine: Creating VHD
	I0731 22:28:38.901177    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:28:42.503479    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 47418C26-BE3D-45A3-8E9D-DC120EE42026
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:28:42.504582    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:42.504582    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:28:42.504676    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:28:42.515468    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:28:45.544097    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:45.544625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:45.544625    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd' -SizeBytes 20000MB
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:28:51.444071    9488 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-207300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:28:51.444260    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:51.444347    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300 -DynamicMemoryEnabled $false
	I0731 22:28:53.544413    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:53.545444    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:53.545488    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300 -Count 2
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\boot2docker.iso'
	I0731 22:28:58.068995    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:58.068995    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:58.069065    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd'
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:00.612970    9488 main.go:141] libmachine: Starting VM...
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300
	I0731 22:29:03.633050    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:03.633050    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:03.633877    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:29:03.633877    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:05.962295    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:05.963066    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:05.963066    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:08.496443    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:08.496443    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:09.503400    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:11.776457    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:11.776625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:11.776735    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:14.338679    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:14.338679    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:15.348687    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:17.596512    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:17.596570    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:17.596570    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:20.102842    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:20.102842    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:21.108566    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:23.270573    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:23.270573    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:23.271015    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:25.726644    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:25.726644    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:26.730075    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:28.980714    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:28.980714    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:28.981474    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:31.465023    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:31.465023    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:31.465613    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:33.520265    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:33.520265    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:33.520371    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:29:33.520538    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:35.613964    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:35.614256    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:35.614256    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:38.000547    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:38.001517    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:38.006578    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:38.017254    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:38.017254    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:29:38.156576    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:29:38.156665    9488 buildroot.go:166] provisioning hostname "ha-207300"
	I0731 22:29:38.156665    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:40.203975    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:40.205205    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:40.205333    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:42.586902    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:42.587461    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:42.592840    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:42.593030    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:42.593030    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300 && echo "ha-207300" | sudo tee /etc/hostname
	I0731 22:29:42.742946    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300
	
	I0731 22:29:42.742946    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:44.758498    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:44.758770    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:44.758834    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:47.174994    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:47.174994    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:47.180447    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:47.181077    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:47.181077    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:29:47.332698    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:29:47.332698    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:29:47.332698    9488 buildroot.go:174] setting up certificates
	I0731 22:29:47.332698    9488 provision.go:84] configureAuth start
	I0731 22:29:47.332698    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:49.353848    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:49.354728    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:49.354728    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:51.792088    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:51.793009    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:51.793009    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:53.852024    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:53.852424    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:53.852424    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:56.274044    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:56.274044    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:56.274044    9488 provision.go:143] copyHostCerts
	I0731 22:29:56.274044    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:29:56.274044    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:29:56.274044    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:29:56.274641    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:29:56.275860    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:29:56.275860    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:29:56.275860    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:29:56.276639    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:29:56.277764    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:29:56.278065    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:29:56.278158    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:29:56.278531    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:29:56.279227    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300 san=[127.0.0.1 172.17.21.92 ha-207300 localhost minikube]
	I0731 22:29:56.550278    9488 provision.go:177] copyRemoteCerts
	I0731 22:29:56.561199    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:29:56.561199    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:58.625652    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:58.625983    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:58.625983    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:01.078020    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:01.079180    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:01.079716    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:01.182473    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6212155s)
	I0731 22:30:01.182473    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:30:01.183091    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:30:01.226479    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:30:01.226479    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:30:01.271843    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:30:01.272343    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:30:01.322908    9488 provision.go:87] duration metric: took 13.9900327s to configureAuth
	I0731 22:30:01.322908    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:30:01.323787    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:30:01.323787    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:05.980716    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:05.980838    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:05.986686    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:05.987229    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:05.987229    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:30:06.116413    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:30:06.116413    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:30:06.116413    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:30:06.116413    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:08.251673    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:08.252344    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:08.252475    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:10.717528    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:10.718261    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:10.722884    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:10.723757    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:10.723757    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:30:10.878778    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:30:10.878920    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:12.958022    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:12.958022    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:12.958249    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:15.639510    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:15.639510    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:15.644748    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:15.645592    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:15.645592    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:30:17.800096    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:30:17.800096    9488 machine.go:97] duration metric: took 44.2791645s to provisionDockerMachine
	I0731 22:30:17.800096    9488 client.go:171] duration metric: took 1m52.2604589s to LocalClient.Create
	I0731 22:30:17.800096    9488 start.go:167] duration metric: took 1m52.2604589s to libmachine.API.Create "ha-207300"
	I0731 22:30:17.800096    9488 start.go:293] postStartSetup for "ha-207300" (driver="hyperv")
	I0731 22:30:17.800096    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:30:17.812161    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:30:17.812161    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:19.871610    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:19.871842    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:19.871842    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:22.279632    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:22.280702    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:22.280998    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:22.388137    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5758811s)
	I0731 22:30:22.400208    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:30:22.407127    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:30:22.407238    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:30:22.407788    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:30:22.408772    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:30:22.408841    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:30:22.419466    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:30:22.435985    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:30:22.479671    9488 start.go:296] duration metric: took 4.679515s for postStartSetup
	I0731 22:30:22.482502    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:24.480653    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:24.480954    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:24.481028    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:26.865667    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:26.865667    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:26.866710    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:30:26.869702    9488 start.go:128] duration metric: took 2m1.3348894s to createHost
	I0731 22:30:26.869784    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:28.872242    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:28.872242    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:28.872675    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:31.347884    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:31.348665    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:31.353944    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:31.354521    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:31.354713    9488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:30:31.494046    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465031.514556384
	
	I0731 22:30:31.494046    9488 fix.go:216] guest clock: 1722465031.514556384
	I0731 22:30:31.494046    9488 fix.go:229] Guest: 2024-07-31 22:30:31.514556384 +0000 UTC Remote: 2024-07-31 22:30:26.8697028 +0000 UTC m=+126.624504601 (delta=4.644853584s)
	I0731 22:30:31.494046    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:33.532157    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:33.532157    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:33.532529    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:35.904058    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:35.904058    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:35.910721    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:35.911109    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:35.911109    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465031
	I0731 22:30:36.061294    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:30:31 UTC 2024
	
	I0731 22:30:36.061294    9488 fix.go:236] clock set: Wed Jul 31 22:30:31 UTC 2024
	 (err=<nil>)
	I0731 22:30:36.061294    9488 start.go:83] releasing machines lock for "ha-207300", held for 2m10.5263644s
	I0731 22:30:36.061294    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:38.053349    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:38.053349    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:38.054295    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:40.423306    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:40.423438    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:40.427579    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:30:40.427758    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:40.438024    9488 ssh_runner.go:195] Run: cat /version.json
	I0731 22:30:40.438167    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:42.592019    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:42.592019    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:42.592669    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:42.592881    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:42.592881    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:42.592997    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:45.188078    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:45.188391    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:45.188783    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:45.209111    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:45.209170    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:45.209531    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:45.283141    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8554199s)
	W0731 22:30:45.283280    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:30:45.300329    9488 ssh_runner.go:235] Completed: cat /version.json: (4.8622061s)
	I0731 22:30:45.313952    9488 ssh_runner.go:195] Run: systemctl --version
	I0731 22:30:45.333719    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:30:45.341629    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:30:45.353119    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:30:45.382683    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:30:45.382762    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:30:45.382839    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 22:30:45.393208    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:30:45.393208    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
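[Editor's note] The registry probe fails with status 127 because the Windows binary name `curl.exe` is executed inside the Linux guest, where the command does not exist ("bash: line 1: curl.exe: command not found" above), so the proxy warning is likely a side effect of that rather than a real connectivity problem. A hedged re-check from the guest, assuming plain curl is available in the guest image:

	ssh docker@172.17.21.92 'curl -sS -m 2 https://registry.k8s.io/ && echo reachable'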
	I0731 22:30:45.428666    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 22:30:45.457117    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:30:45.474944    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:30:45.485936    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:30:45.515754    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:30:45.546839    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:30:45.575005    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:30:45.604385    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:30:45.633769    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:30:45.664139    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:30:45.692973    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:30:45.721365    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:30:45.752204    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:30:45.784307    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:45.966448    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
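[Editor's note] The run above points crictl at the containerd socket and rewrites /etc/containerd/config.toml to use the cgroupfs cgroup driver, the runc v2 runtime, and the registry.k8s.io/pause:3.9 sandbox image before restarting containerd. Condensed into a standalone guest-side sketch (paths and values taken from the log):

	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd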
	I0731 22:30:46.000194    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:30:46.012584    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:30:46.052332    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:30:46.091711    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:30:46.136343    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:30:46.170522    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:30:46.203683    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:30:46.260753    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:30:46.280455    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:30:46.321218    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:30:46.338019    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:30:46.353721    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:30:46.399106    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:30:46.572355    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:30:46.728599    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:30:46.728697    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:30:46.772433    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:46.974151    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:30:49.509568    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5353244s)
	I0731 22:30:49.521135    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:30:49.553097    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:30:49.585108    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:30:49.775682    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:30:49.944903    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:50.135217    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:30:50.173132    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:30:50.204743    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:50.424599    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:30:50.523208    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:30:50.533978    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:30:50.543018    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:30:50.554619    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:30:50.571122    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:30:50.622569    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 22:30:50.632398    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:30:50.672114    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
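[Editor's note] At this point docker is the selected runtime and cri-dockerd exposes the CRI socket the kubelet will use; the version probes above can be reproduced directly on the node. A quick sanity check, assuming a shell in the guest:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version   # expects RuntimeName: docker, RuntimeApiVersion: v1
	docker version --format '{{.Server.Version}}'                             # 27.1.1 in this run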
	I0731 22:30:50.701078    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:30:50.701078    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:30:50.709070    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:30:50.709070    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:30:50.719070    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:30:50.725463    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:30:50.757660    9488 kubeadm.go:883] updating cluster {Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:30:50.757660    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:30:50.767059    9488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 22:30:50.795754    9488 docker.go:685] Got preloaded images: 
	I0731 22:30:50.795754    9488 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0731 22:30:50.808750    9488 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 22:30:50.833087    9488 ssh_runner.go:195] Run: which lz4
	I0731 22:30:50.838550    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 22:30:50.849237    9488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 22:30:50.855487    9488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 22:30:50.855487    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0731 22:30:52.347539    9488 docker.go:649] duration metric: took 1.5089697s to copy over tarball
	I0731 22:30:52.359557    9488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 22:31:01.117509    9488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7577834s)
	I0731 22:31:01.117509    9488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 22:31:01.179265    9488 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 22:31:01.196219    9488 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0731 22:31:01.236877    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:31:01.426306    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:31:04.740562    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3141708s)
	I0731 22:31:04.749824    9488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 22:31:04.783559    9488 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 22:31:04.783624    9488 cache_images.go:84] Images are preloaded, skipping loading
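[Editor's note] The preload sequence above copies a ~360 MB lz4 tarball of image layers into the guest, unpacks it under /var (which backs /var/lib/docker), restores repositories.json, and restarts docker so the v1.30.3 control-plane images are available without pulling. The guest-side portion, as a sketch using the same paths:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo systemctl restart docker
	docker images --format '{{.Repository}}:{{.Tag}}'   # should now list the registry.k8s.io/kube-* images shown above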
	I0731 22:31:04.783707    9488 kubeadm.go:934] updating node { 172.17.21.92 8443 v1.30.3 docker true true} ...
	I0731 22:31:04.783974    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.21.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
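[Editor's note] The kubelet fragment above is installed as a systemd drop-in (the 308-byte 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the base unit's command before the overridden command line takes effect, while the [Unit]/[Install] parts go to kubelet.service itself. A guest-side sketch reassembled from the fragment logged above (the exact file minikube writes may differ slightly):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.21.92
	EOF
	sudo systemctl daemon-reload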
	I0731 22:31:04.793268    9488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 22:31:04.858907    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:31:04.858907    9488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:31:04.858907    9488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:31:04.858907    9488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.21.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-207300 NodeName:ha-207300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.21.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.21.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:31:04.859497    9488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.21.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-207300"
	  kubeletExtraArgs:
	    node-ip: 172.17.21.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.21.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
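[Editor's note] This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and fed to kubeadm init further below (with the full --ignore-preflight-errors list shown verbatim at the actual invocation later in this log). The hand-off, as a standalone sketch using the paths from the log:

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml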
	
	I0731 22:31:04.859606    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:31:04.872143    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:31:04.900301    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:31:04.900469    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
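[Editor's note] kube-vip runs as a static pod on each control-plane node and advertises the HA virtual IP 172.17.31.254 via ARP on eth0, with control-plane load balancing on port 8443 (cp_enable/lb_enable above). Once the node is up, a hedged check from the guest might look like:

	ip addr show eth0 | grep -F 172.17.31.254     # the VIP should be bound on the interface named in vip_interface
	curl -k https://172.17.31.254:8443/healthz    # /healthz is typically reachable anonymously under default RBAC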
	I0731 22:31:04.912499    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:31:04.931663    9488 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:31:04.943927    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:31:04.961730    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0731 22:31:04.988322    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:31:05.015148    9488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0731 22:31:05.041139    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0731 22:31:05.081113    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:31:05.087227    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:31:05.114969    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:31:05.303405    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:31:05.335090    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.21.92
	I0731 22:31:05.335140    9488 certs.go:194] generating shared ca certs ...
	I0731 22:31:05.335140    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.335942    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:31:05.336494    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:31:05.336686    9488 certs.go:256] generating profile certs ...
	I0731 22:31:05.337485    9488 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:31:05.337553    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt with IP's: []
	I0731 22:31:05.848622    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt ...
	I0731 22:31:05.848622    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt: {Name:mk18891580ce23bacd68b0f7aef728a6870066fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.850535    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key ...
	I0731 22:31:05.850535    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key: {Name:mk75223d6c3518ef73cc5bc219634d912f36568b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.851544    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79
	I0731 22:31:05.851544    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.31.254]
	I0731 22:31:06.204999    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 ...
	I0731 22:31:06.204999    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79: {Name:mk8e736c4551099b4a6b3f35f2ed10d6cbb51124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.206035    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79 ...
	I0731 22:31:06.206035    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79: {Name:mkd4b5bba321ebbd39df52b27c1c33413da1a8c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.207028    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:31:06.220043    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:31:06.221020    9488 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
	I0731 22:31:06.221697    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt with IP's: []
	I0731 22:31:06.440827    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt ...
	I0731 22:31:06.440827    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt: {Name:mkf923965ffb62fe1b9ad3347bfbede9812f45a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.441828    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key ...
	I0731 22:31:06.441828    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key: {Name:mk02bb7795dea1704912df99fedf92def1afb132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.443500    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:31:06.444010    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:31:06.444703    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:31:06.452699    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:31:06.453710    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:31:06.456700    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:31:06.494569    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:31:06.535968    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:31:06.575958    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:31:06.614203    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 22:31:06.659166    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:31:06.699085    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:31:06.745991    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:31:06.788748    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:31:06.831627    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:31:06.876359    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:31:06.916726    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:31:06.959329    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:31:06.979683    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:31:07.012283    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.019829    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.031106    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.060520    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:31:07.089991    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:31:07.119935    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.127930    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.139217    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.157369    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:31:07.185396    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:31:07.212372    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.218739    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.228839    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.248953    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
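[Editor's note] The 8-hex-character link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the hash of the certificate's subject, plus a .0 suffix, is what CApath lookups in /etc/ssl/certs expect. Recreating one of the links by hand:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 for this CA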
	I0731 22:31:07.278097    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:31:07.284810    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:31:07.285105    9488 kubeadm.go:392] StartCluster: {Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:07.292998    9488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 22:31:07.330731    9488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:31:07.363738    9488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:31:07.391733    9488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:31:07.408158    9488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:31:07.408158    9488 kubeadm.go:157] found existing configuration files:
	
	I0731 22:31:07.423277    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:31:07.439898    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:31:07.449945    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:31:07.478872    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:31:07.494349    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:31:07.510175    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:31:07.539391    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:31:07.558062    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:31:07.570065    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:31:07.597086    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:31:07.612594    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:31:07.623493    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:31:07.639148    9488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 22:31:08.058482    9488 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 22:31:22.132529    9488 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:31:22.133509    9488 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 22:31:22.133509    9488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:31:22.137503    9488 out.go:204]   - Generating certificates and keys ...
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-207300 localhost] and IPs [172.17.21.92 127.0.0.1 ::1]
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-207300 localhost] and IPs [172.17.21.92 127.0.0.1 ::1]
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:31:22.140545    9488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:31:22.146503    9488 out.go:204]   - Booting up control plane ...
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:31:22.147540    9488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003493206s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [api-check] The API server is healthy after 7.002058178s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:31:22.149510    9488 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:31:22.149510    9488 kubeadm.go:310] [mark-control-plane] Marking the node ha-207300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:31:22.149510    9488 kubeadm.go:310] [bootstrap-token] Using token: 3zaf11.kkfeag4mao0twvx0
	I0731 22:31:22.152500    9488 out.go:204]   - Configuring RBAC rules ...
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:31:22.154513    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:31:22.154513    9488 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:31:22.154513    9488 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:31:22.154513    9488 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:31:22.154513    9488 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:31:22.154513    9488 kubeadm.go:310] 
	I0731 22:31:22.154513    9488 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:31:22.154513    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:31:22.155512    9488 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:31:22.155512    9488 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.156509    9488 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:31:22.156509    9488 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:31:22.156509    9488 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:31:22.156509    9488 kubeadm.go:310] 
	I0731 22:31:22.156509    9488 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:31:22.156509    9488 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:31:22.156509    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3zaf11.kkfeag4mao0twvx0 \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--control-plane 
	I0731 22:31:22.157506    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:31:22.157506    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3zaf11.kkfeag4mao0twvx0 \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
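[Editor's note] The bootstrap token and discovery hash printed above are complete in the log, so a worker join needs nothing more; joining a second control-plane node (this is an HA profile) would additionally require a certificate key, e.g. from 'kubeadm init phase upload-certs --upload-certs', which this init skipped ("[upload-certs] Skipping phase" above). A hypothetical worker join using only values shown in the log:

	kubeadm join control-plane.minikube.internal:8443 --token 3zaf11.kkfeag4mao0twvx0 \
	    --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf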
	I0731 22:31:22.158513    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:31:22.158513    9488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:31:22.162503    9488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:31:22.177505    9488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:31:22.186174    9488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:31:22.186174    9488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 22:31:22.233642    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:31:22.869001    9488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:31:22.882532    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:22.885157    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300 minikube.k8s.io/updated_at=2024_07_31T22_31_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=true
	I0731 22:31:22.909123    9488 ops.go:34] apiserver oom_adj: -16
	I0731 22:31:23.093292    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:23.601389    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:24.108247    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:24.595341    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:25.096733    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:25.598273    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:26.097963    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:26.599824    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:27.102146    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:27.609592    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:28.109528    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:28.596168    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:29.099763    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:29.601088    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:30.103011    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:30.604590    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:31.109646    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:31.595619    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:32.099737    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:32.600421    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:33.109257    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:33.608656    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:34.099608    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:34.604087    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.098013    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.604303    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.716632    9488 kubeadm.go:1113] duration metric: took 12.8474108s to wait for elevateKubeSystemPrivileges
	I0731 22:31:35.716632    9488 kubeadm.go:394] duration metric: took 28.4311651s to StartCluster
	I0731 22:31:35.716632    9488 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:35.716632    9488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:31:35.718737    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:35.719594    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:31:35.719594    9488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:31:35.719594    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:31:35.719594    9488 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 22:31:35.719594    9488 addons.go:69] Setting storage-provisioner=true in profile "ha-207300"
	I0731 22:31:35.719594    9488 addons.go:234] Setting addon storage-provisioner=true in "ha-207300"
	I0731 22:31:35.720580    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:31:35.720580    9488 addons.go:69] Setting default-storageclass=true in profile "ha-207300"
	I0731 22:31:35.720580    9488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-207300"
	I0731 22:31:35.720580    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:31:35.720580    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:35.721593    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:35.917386    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 22:31:36.279539    9488 start.go:971] {"host.minikube.internal": 172.17.16.1} host record injected into CoreDNS's ConfigMap
	I0731 22:31:37.993009    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:37.993219    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:37.993298    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:37.993298    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:37.994649    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:31:37.995691    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 22:31:37.996130    9488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:31:37.997094    9488 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 22:31:37.998092    9488 addons.go:234] Setting addon default-storageclass=true in "ha-207300"
	I0731 22:31:37.998092    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:31:37.999124    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:37.999124    9488 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:31:37.999124    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:31:37.999124    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:40.261168    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:40.261836    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:40.261836    9488 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:31:40.261836    9488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:31:40.261836    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:40.397591    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:40.397934    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:40.398146    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:31:42.490146    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:42.490146    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:42.490268    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:31:43.067095    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:31:43.068098    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:43.068098    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:31:43.206158    9488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:31:45.012766    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:31:45.012766    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:45.014005    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:31:45.149489    9488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:31:45.284245    9488 round_trippers.go:463] GET https://172.17.31.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 22:31:45.285255    9488 round_trippers.go:469] Request Headers:
	I0731 22:31:45.285255    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:31:45.285255    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:31:45.297313    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:31:45.298884    9488 round_trippers.go:463] PUT https://172.17.31.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 22:31:45.298929    9488 round_trippers.go:469] Request Headers:
	I0731 22:31:45.298929    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:31:45.298929    9488 round_trippers.go:473]     Content-Type: application/json
	I0731 22:31:45.298929    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:31:45.302936    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:31:45.306766    9488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 22:31:45.311289    9488 addons.go:510] duration metric: took 9.5915727s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 22:31:45.311289    9488 start.go:246] waiting for cluster config update ...
	I0731 22:31:45.311289    9488 start.go:255] writing updated cluster config ...
	I0731 22:31:45.314730    9488 out.go:177] 
	I0731 22:31:45.325998    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:31:45.325998    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:31:45.331016    9488 out.go:177] * Starting "ha-207300-m02" control-plane node in "ha-207300" cluster
	I0731 22:31:45.335989    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:31:45.335989    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:45.335989    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:31:45.337002    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:31:45.337002    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:31:45.338996    9488 start.go:360] acquireMachinesLock for ha-207300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:31:45.338996    9488 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-207300-m02"
	I0731 22:31:45.338996    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:31:45.338996    9488 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0731 22:31:45.343991    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:31:45.343991    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:31:45.343991    9488 client.go:168] LocalClient.Create starting
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:31:48.857854    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:31:48.857854    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:48.858025    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:31:50.374130    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:31:50.374130    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:50.374774    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:31:53.905340    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:31:53.905340    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:53.908508    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:31:54.383288    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:31:54.678107    9488 main.go:141] libmachine: Creating VM...
	I0731 22:31:54.678107    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:31:57.605842    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:31:57.606730    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:57.606849    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:31:57.606957    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:31:59.368070    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:31:59.368070    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:59.368070    9488 main.go:141] libmachine: Creating VHD
	I0731 22:31:59.368239    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:32:03.099372    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9C3107D0-701E-45FE-9F08-1F10E51140A7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:32:03.099473    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:03.099530    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:32:03.099583    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:32:03.110082    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd' -SizeBytes 20000MB
	I0731 22:32:08.866083    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:08.866083    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:08.866469    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:32:12.519217    9488 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-207300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:32:12.519217    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:12.519307    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300-m02 -DynamicMemoryEnabled $false
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300-m02 -Count 2
	I0731 22:32:16.866836    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:16.866836    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:16.867008    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\boot2docker.iso'
	I0731 22:32:19.374171    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:19.375043    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:19.375116    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd'
	I0731 22:32:22.054654    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:22.054973    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:22.054973    9488 main.go:141] libmachine: Starting VM...
	I0731 22:32:22.054973    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300-m02
	I0731 22:32:25.235013    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:25.235013    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:25.235013    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:32:25.235216    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:27.555350    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:27.555862    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:27.555912    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:30.064982    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:30.064982    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:31.067429    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:35.785999    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:35.787012    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:36.793716    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:38.977650    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:38.978033    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:38.978136    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:41.536945    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:41.536945    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:42.541192    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:44.731361    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:44.731361    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:44.731477    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:47.277122    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:47.277122    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:48.293756    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:50.584563    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:50.585590    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:50.585642    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:53.122743    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:32:53.123002    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:53.123002    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:55.194673    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:55.194673    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:55.195623    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:32:55.195764    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:57.312221    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:57.312396    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:57.312616    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:59.797824    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:32:59.798840    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:59.803996    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:32:59.814903    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:32:59.814903    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:32:59.942650    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:32:59.942713    9488 buildroot.go:166] provisioning hostname "ha-207300-m02"
	I0731 22:32:59.942823    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:02.069447    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:02.069447    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:02.069708    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:04.609475    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:04.609915    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:04.617633    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:04.618472    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:04.618636    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300-m02 && echo "ha-207300-m02" | sudo tee /etc/hostname
	I0731 22:33:04.766635    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300-m02
	
	I0731 22:33:04.766635    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:06.857594    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:06.858601    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:06.858601    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:09.399613    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:09.400657    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:09.405618    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:09.406585    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:09.406652    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:33:09.560537    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:33:09.560537    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:33:09.560537    9488 buildroot.go:174] setting up certificates
	I0731 22:33:09.560537    9488 provision.go:84] configureAuth start
	I0731 22:33:09.560537    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:16.228619    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:16.228779    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:16.228779    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:18.724950    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:18.724950    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:18.724950    9488 provision.go:143] copyHostCerts
	I0731 22:33:18.725831    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:33:18.725955    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:33:18.725955    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:33:18.726519    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:33:18.727784    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:33:18.727899    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:33:18.727899    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:33:18.728426    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:33:18.729715    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:33:18.730163    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:33:18.730216    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:33:18.730216    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:33:18.731714    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300-m02 san=[127.0.0.1 172.17.28.136 ha-207300-m02 localhost minikube]
	I0731 22:33:18.857172    9488 provision.go:177] copyRemoteCerts
	I0731 22:33:18.872231    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:33:18.872231    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:21.020077    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:21.020548    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:21.020636    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:23.525875    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:23.525875    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:23.525875    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:33:23.636615    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7642607s)
	I0731 22:33:23.636615    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:33:23.637143    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:33:23.683719    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:33:23.683891    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:33:23.727342    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:33:23.727342    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:33:23.775261    9488 provision.go:87] duration metric: took 14.214542s to configureAuth
	I0731 22:33:23.775391    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:33:23.776156    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:33:23.776288    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:25.872775    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:25.872775    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:25.873049    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:28.371704    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:28.371704    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:28.377885    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:28.378417    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:28.378417    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:33:28.513292    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:33:28.513292    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:33:28.513507    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:33:28.513644    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:30.605466    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:30.606279    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:30.606279    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:33.073433    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:33.073433    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:33.079145    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:33.079752    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:33.079956    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.21.92"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:33:33.233205    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.21.92
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:33:33.233205    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:35.340844    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:35.341024    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:35.341024    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:37.812387    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:37.813436    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:37.819057    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:37.819764    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:37.819764    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:33:40.064678    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:33:40.064678    9488 machine.go:97] duration metric: took 44.8684806s to provisionDockerMachine
	I0731 22:33:40.064678    9488 client.go:171] duration metric: took 1m54.7192249s to LocalClient.Create
	I0731 22:33:40.064678    9488 start.go:167] duration metric: took 1m54.7192249s to libmachine.API.Create "ha-207300"
	I0731 22:33:40.064678    9488 start.go:293] postStartSetup for "ha-207300-m02" (driver="hyperv")
	I0731 22:33:40.064678    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:33:40.077684    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:33:40.077684    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:42.201630    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:42.202400    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:42.202400    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:44.638075    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:44.638330    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:44.638766    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:33:44.740188    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6623232s)
	I0731 22:33:44.750770    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:33:44.757841    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:33:44.757841    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:33:44.758256    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:33:44.759253    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:33:44.759307    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:33:44.770650    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:33:44.788954    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:33:44.831059    9488 start.go:296] duration metric: took 4.7663197s for postStartSetup
	I0731 22:33:44.834064    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:46.924567    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:46.924778    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:46.924778    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:49.376115    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:49.376115    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:49.376817    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:33:49.379357    9488 start.go:128] duration metric: took 2m4.0387793s to createHost
	I0731 22:33:49.379357    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:51.488048    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:51.488208    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:51.488208    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:53.951762    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:53.951762    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:53.956837    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:53.957534    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:53.957534    9488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:33:54.086396    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465234.107009733
	
	I0731 22:33:54.086396    9488 fix.go:216] guest clock: 1722465234.107009733
	I0731 22:33:54.086396    9488 fix.go:229] Guest: 2024-07-31 22:33:54.107009733 +0000 UTC Remote: 2024-07-31 22:33:49.3793576 +0000 UTC m=+329.131581301 (delta=4.727652133s)
	I0731 22:33:54.086396    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:56.174678    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:56.174678    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:56.174950    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:58.646546    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:58.646753    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:58.652287    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:58.653074    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:58.653074    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465234
	I0731 22:33:58.795094    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:33:54 UTC 2024
	
	I0731 22:33:58.795159    9488 fix.go:236] clock set: Wed Jul 31 22:33:54 UTC 2024
	 (err=<nil>)
	I0731 22:33:58.795159    9488 start.go:83] releasing machines lock for "ha-207300-m02", held for 2m13.4544611s
	I0731 22:33:58.795440    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:00.879249    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:00.879249    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:00.879437    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:03.392217    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:03.393129    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:03.398175    9488 out.go:177] * Found network options:
	I0731 22:34:03.401366    9488 out.go:177]   - NO_PROXY=172.17.21.92
	W0731 22:34:03.404143    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:34:03.406610    9488 out.go:177]   - NO_PROXY=172.17.21.92
	W0731 22:34:03.408556    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:34:03.409525    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:34:03.412528    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:34:03.412528    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:03.422527    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:34:03.422527    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:05.603441    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:05.603441    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:05.603573    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:08.197049    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:08.197049    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:08.197848    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:34:08.214981    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:08.214981    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:08.215537    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:34:08.289575    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8769854s)
	W0731 22:34:08.289575    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:34:08.307084    9488 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8844945s)
	W0731 22:34:08.308064    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:34:08.320649    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:34:08.346718    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
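
The find/mv step above sidelines any bridge or podman CNI config so it cannot shadow the CNI that minikube manages, renaming each match with a .mk_disabled suffix. A sketch of the same rename pass in Go; the directory and suffix come from the log, while the function itself is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by adding a
// .mk_disabled suffix, mirroring the find ... -exec mv step in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", disabled)
}
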
	I0731 22:34:08.346718    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:34:08.347683    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:34:08.393332    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 22:34:08.405573    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:34:08.405843    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 22:34:08.425203    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:34:08.446050    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:34:08.456837    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:34:08.486959    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:34:08.515712    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:34:08.548025    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:34:08.580806    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:34:08.611467    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:34:08.642152    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:34:08.670942    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:34:08.700239    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:34:08.728504    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:34:08.757105    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:08.929125    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
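
The run of sed commands above rewrites /etc/containerd/config.toml in place before containerd is restarted: the pause image, restrict_oom_score_adj, SystemdCgroup = false for the cgroupfs driver, the runc v2 runtime, the CNI conf_dir, and enable_unprivileged_ports. A small Go equivalent of one of those substitutions, using a capture group the way the sed expression does; the helper name is an assumption, the pattern and value are taken from the log:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites every "SystemdCgroup = ..." line while keeping
// its indentation, the same effect as the sed expression in the log:
//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
func setSystemdCgroup(configTOML string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
}

func main() {
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false))
}
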
	I0731 22:34:08.957558    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:34:08.968354    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:34:09.000064    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:34:09.029492    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:34:09.074099    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:34:09.107080    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:34:09.141664    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:34:09.201677    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:34:09.223438    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:34:09.268180    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:34:09.284435    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:34:09.302149    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:34:09.352647    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:34:09.545846    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:34:09.721336    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:34:09.721447    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:34:09.768087    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:09.965282    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:34:12.519645    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5543304s)
	I0731 22:34:12.531683    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:34:12.564079    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:34:12.596628    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:34:12.788924    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:34:12.968374    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:13.159740    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:34:13.196480    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:34:13.230968    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:13.421009    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:34:13.537022    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:34:13.549040    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:34:13.558788    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:34:13.571894    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:34:13.588905    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:34:13.646588    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 22:34:13.655272    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:34:13.698780    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:34:13.737974    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:34:13.741019    9488 out.go:177]   - env NO_PROXY=172.17.21.92
	I0731 22:34:13.743628    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:34:13.747384    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:34:13.750649    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:34:13.750649    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:34:13.765420    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:34:13.771337    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
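
The grep -v / echo / cp pipeline above keeps exactly one host.minikube.internal entry in the guest's /etc/hosts: any stale line for that name is dropped and a fresh one pointing at the gateway just discovered (172.17.16.1) is appended. A sketch of the same idempotent rewrite in Go, with the helper name being illustrative:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry returns hosts content with exactly one line mapping name
// to ip, dropping any previous line for that name first, which matches the
// grep -v ... ; echo ... pattern in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.17.16.9\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "172.17.16.1", "host.minikube.internal"))
}
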
	I0731 22:34:13.792602    9488 mustload.go:65] Loading cluster: ha-207300
	I0731 22:34:13.793258    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:34:13.793743    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:15.904890    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:15.904890    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:15.904890    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:34:15.905657    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.28.136
	I0731 22:34:15.905657    9488 certs.go:194] generating shared ca certs ...
	I0731 22:34:15.905657    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:15.906556    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:34:15.906764    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:34:15.907307    9488 certs.go:256] generating profile certs ...
	I0731 22:34:15.907500    9488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:34:15.908040    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2
	I0731 22:34:15.908204    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.28.136 172.17.31.254]
	I0731 22:34:16.052368    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 ...
	I0731 22:34:16.052368    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2: {Name:mk6848f579dde66d07ff396b0f8e1aa80ebe6f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:16.054652    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2 ...
	I0731 22:34:16.054652    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2: {Name:mk67e2845ee35c8c5d6eb5e9e1119a41a08fae97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:16.055234    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:34:16.070735    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:34:16.072508    9488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
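
The apiserver certificate generated above lists every address a client might use as an IP SAN: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, both control-plane node IPs, and the kube-vip VIP 172.17.31.254. A minimal sketch of assembling such a SAN list into an x509 template; the CA signing and key handling minikube performs around this are omitted and the function name is an assumption:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// apiserverCertTemplate returns a server-cert template whose IP SANs cover
// the service IP, loopback, node IPs and the HA VIP, like the cert in the log.
func apiserverCertTemplate(sans []string) (*x509.Certificate, error) {
	var ips []net.IP
	for _, s := range sans {
		ip := net.ParseIP(s)
		if ip == nil {
			return nil, fmt.Errorf("not an IP: %q", s)
		}
		ips = append(ips, ip)
	}
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}, nil
}

func main() {
	tmpl, err := apiserverCertTemplate([]string{
		"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"172.17.21.92", "172.17.28.136", "172.17.31.254",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("SANs:", tmpl.IPAddresses)
}
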
	I0731 22:34:16.072508    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:34:16.073041    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:34:16.073889    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:34:16.074516    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:34:16.074516    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:34:16.075358    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:34:16.075477    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:34:16.075839    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:34:16.076208    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:34:16.076522    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:34:16.076610    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:34:16.077205    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:34:16.077352    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:16.077559    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:34:16.077809    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:18.247403    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:18.247403    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:18.247745    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:20.785003    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:34:20.785003    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:20.785074    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:34:20.886039    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0731 22:34:20.892577    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:34:20.926046    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0731 22:34:20.937429    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 22:34:20.968251    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:34:20.975004    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:34:21.008701    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:34:21.016412    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 22:34:21.049502    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:34:21.055607    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:34:21.086799    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0731 22:34:21.092650    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:34:21.111241    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:34:21.160274    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:34:21.204452    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:34:21.254957    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:34:21.299674    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 22:34:21.346523    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:34:21.395038    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:34:21.446000    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:34:21.491370    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:34:21.537791    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:34:21.582974    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:34:21.631284    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:34:21.659795    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 22:34:21.688923    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:34:21.719195    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 22:34:21.748950    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:34:21.778337    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:34:21.807017    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:34:21.851036    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:34:21.873286    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:34:21.902791    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.910104    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.920316    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.942242    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:34:21.973783    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:34:22.007640    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.013885    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.025224    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.047232    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:34:22.076713    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:34:22.106818    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.113564    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.125269    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.144629    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 22:34:22.175252    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:34:22.182261    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:34:22.182469    9488 kubeadm.go:934] updating node {m02 172.17.28.136 8443 v1.30.3 docker true true} ...
	I0731 22:34:22.182757    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.28.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:34:22.182757    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:34:22.193866    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:34:22.222490    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:34:22.222490    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
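
The static pod manifest above is what keeps the control-plane VIP 172.17.31.254 reachable: kube-vip runs on every control-plane node, vip_leaderelection picks one instance to announce the address on eth0 via ARP, and lb_enable/lb_port 8443 load-balance apiserver traffic across the control planes. A sketch of filling the per-cluster knobs from a template, assuming a cut-down manifest rather than minikube's embedded one:

package main

import (
	"os"
	"text/template"
)

// vipConfig holds the values that vary per cluster in the manifest above.
type vipConfig struct {
	VIP       string
	Interface string
	Port      string
}

// manifestTmpl is a trimmed-down stand-in for the kube-vip pod manifest;
// only the env entries carrying per-cluster values are shown.
const manifestTmpl = `    - name: vip_interface
      value: {{.Interface}}
    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	cfg := vipConfig{VIP: "172.17.31.254", Interface: "eth0", Port: "8443"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
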
	I0731 22:34:22.235639    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:34:22.250822    9488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:34:22.263769    9488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:34:22.284827    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0731 22:34:22.284827    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0731 22:34:22.284942    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
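
The three downloads above fetch kubelet, kubeadm and kubectl for the guest; the checksum=file:...sha256 query means each cached binary is verified against the published SHA-256 before it is trusted and copied into the VM. A sketch of that verification step in Go; the file paths are hypothetical and the downloader plumbing minikube uses is omitted:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks that the file at path hashes to the hex digest that a
// published .sha256 sidecar (first whitespace-separated field) advertises.
func verifySHA256(path, published string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	want := strings.Fields(published)[0]
	if got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, want)
	}
	return nil
}

func main() {
	// Hypothetical local path; in the log the binaries land under
	// .minikube\cache\linux\amd64\v1.30.3 before being scp'd into the VM.
	if err := verifySHA256("kubelet", "0123abcd... kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
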
	I0731 22:34:23.516723    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:34:23.530934    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:34:23.539308    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:34:23.539591    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:34:28.189806    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:34:28.202298    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:34:28.211426    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:34:28.211426    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:34:33.004238    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:34:33.026887    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:34:33.039506    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:34:33.046395    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:34:33.046395    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 22:34:33.641433    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:34:33.658690    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:34:33.688056    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:34:33.720469    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0731 22:34:33.770738    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:34:33.777508    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:34:33.808590    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:33.998606    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:34:34.034312    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:34:34.034568    9488 start.go:317] joinCluster: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:34:34.034568    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:34:34.035457    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:36.192511    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:36.193016    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:36.193016    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:38.708440    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:34:38.708440    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:38.709406    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:34:39.406557    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3710227s)
	I0731 22:34:39.406606    9488 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:34:39.406711    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kpm2id.8a3cjgor80eivp07 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m02 --control-plane --apiserver-advertise-address=172.17.28.136 --apiserver-bind-port=8443"
	I0731 22:35:20.418539    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kpm2id.8a3cjgor80eivp07 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m02 --control-plane --apiserver-advertise-address=172.17.28.136 --apiserver-bind-port=8443": (41.0107538s)
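
The 41-second step above is the core of adding m02 as a second control-plane node: kubeadm join against the shared endpoint control-plane.minikube.internal:8443 with a fresh bootstrap token, the pinned CA cert hash, --control-plane, the node's own advertise address, and the cri-dockerd socket. A sketch of assembling those arguments from a struct; the field names are illustrative, the values are the ones in the log:

package main

import (
	"fmt"
	"strings"
)

// joinParams carries the pieces that vary per node when building the
// control-plane join command seen in the log.
type joinParams struct {
	Endpoint         string
	Token            string
	CACertHash       string
	NodeName         string
	AdvertiseAddress string
	CRISocket        string
}

func (p joinParams) command() string {
	args := []string{
		"kubeadm", "join", p.Endpoint,
		"--token", p.Token,
		"--discovery-token-ca-cert-hash", p.CACertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", p.CRISocket,
		"--node-name=" + p.NodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + p.AdvertiseAddress,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(args, " ")
}

func main() {
	p := joinParams{
		Endpoint:         "control-plane.minikube.internal:8443",
		Token:            "kpm2id.8a3cjgor80eivp07",
		CACertHash:       "sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf",
		NodeName:         "ha-207300-m02",
		AdvertiseAddress: "172.17.28.136",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	}
	fmt.Println(p.command())
}
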
	I0731 22:35:20.418651    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:35:21.189274    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300-m02 minikube.k8s.io/updated_at=2024_07_31T22_35_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=false
	I0731 22:35:21.366833    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-207300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:35:21.524598    9488 start.go:319] duration metric: took 47.4894264s to joinCluster
	I0731 22:35:21.524689    9488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:35:21.525364    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:35:21.529866    9488 out.go:177] * Verifying Kubernetes components...
	I0731 22:35:21.545209    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:35:21.872507    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:35:21.924583    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:35:21.925338    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:35:21.925518    9488 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.31.254:8443 with https://172.17.21.92:8443
	I0731 22:35:21.925943    9488 node_ready.go:35] waiting up to 6m0s for node "ha-207300-m02" to be "Ready" ...
	I0731 22:35:21.926601    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:21.926601    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:21.926601    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:21.926601    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:21.950993    9488 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0731 22:35:22.431992    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:22.431992    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:22.431992    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:22.431992    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:22.437718    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:22.942287    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:22.942287    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:22.942356    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:22.942356    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:22.948151    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:23.434328    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:23.434472    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:23.434540    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:23.434630    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:23.439834    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:23.941224    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:23.941224    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:23.941224    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:23.941224    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:23.944598    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:23.946039    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
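
From this point the test simply polls GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02 roughly every half second, for up to the 6m0s budget noted earlier, until the node's Ready condition turns True; each node_ready line records a miss. A generic sketch of that wait loop in Go, with the readiness predicate left abstract since the real check parses the node's status conditions:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it reports true, the context is
// cancelled, or the timeout elapses; the same shape as the 6m node wait above.
func waitFor(ctx context.Context, interval, timeout time.Duration, check func(context.Context) (bool, error)) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check(ctx)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for node to be Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(context.Background(), 500*time.Millisecond, 5*time.Second,
		func(ctx context.Context) (bool, error) {
			attempts++
			return attempts >= 4, nil // stand-in for the Ready condition check
		})
	fmt.Println("ready:", err == nil, "after", attempts, "attempts")
}
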
	I0731 22:35:24.440628    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:24.440704    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:24.440704    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:24.440704    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:24.454556    9488 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0731 22:35:24.928170    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:24.928283    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:24.928283    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:24.928283    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:24.933218    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:25.434033    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:25.434118    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:25.434118    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:25.434118    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:25.439722    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:25.938844    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:25.938844    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:25.939053    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:25.939053    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:25.943269    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:26.431086    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:26.431086    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:26.431086    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:26.431086    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:26.436665    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:26.437670    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:26.940808    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:26.940808    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:26.940808    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:26.940808    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:26.945505    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:27.433124    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:27.433124    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:27.433124    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:27.433124    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:27.438656    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:27.939564    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:27.939857    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:27.939857    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:27.939857    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:27.945266    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:28.430056    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:28.430227    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:28.430290    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:28.430290    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:28.580301    9488 round_trippers.go:574] Response Status: 200 OK in 150 milliseconds
	I0731 22:35:28.583364    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:28.935461    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:28.935564    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:28.935564    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:28.935564    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:28.940187    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:29.439830    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:29.439830    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:29.439830    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:29.439830    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:29.444016    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:29.928246    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:29.928246    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:29.928246    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:29.928246    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:29.991875    9488 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0731 22:35:30.428067    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:30.428173    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:30.428173    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:30.428173    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:30.474914    9488 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0731 22:35:30.932850    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:30.933134    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:30.933134    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:30.933134    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:30.940831    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:30.942387    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:31.433279    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:31.433279    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:31.433279    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:31.433279    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:31.440212    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:31.933852    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:31.934076    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:31.934076    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:31.934076    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:31.938924    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:32.432197    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:32.432197    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:32.432197    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:32.432197    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:32.437534    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:32.932277    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:32.932335    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:32.932335    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:32.932335    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:32.936614    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:33.433241    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:33.433241    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:33.433241    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:33.433325    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:33.440904    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:33.442238    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:33.934157    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:33.934271    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:33.934271    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:33.934271    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:33.946673    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:35:34.434479    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:34.434479    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:34.434479    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:34.434479    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:34.439106    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:34.934864    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:34.934864    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:34.934864    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:34.934864    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:34.940028    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:35.433408    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:35.433530    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:35.433530    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:35.433530    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:35.439049    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:35.939155    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:35.939237    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:35.939237    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:35.939237    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:35.943596    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:35.944314    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:36.427315    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:36.427315    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:36.427315    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:36.427315    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:36.432847    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:36.928683    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:36.928683    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:36.928683    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:36.928683    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:36.933183    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:37.431032    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:37.431343    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:37.431408    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:37.431408    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:37.435956    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:37.939082    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:37.939082    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:37.939082    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:37.939082    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:37.945448    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:37.946217    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:38.438886    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:38.438970    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:38.439010    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:38.439010    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:38.443653    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:38.938723    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:38.938926    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:38.938926    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:38.938926    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:38.943558    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:39.436728    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:39.436728    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:39.436728    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:39.436728    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:39.441380    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:39.936511    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:39.936852    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:39.936852    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:39.936852    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:39.941132    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:40.437345    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:40.437345    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:40.437445    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:40.437445    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:40.442710    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:40.443867    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:40.936527    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:40.936605    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:40.936605    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:40.936605    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:40.941893    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:41.435674    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:41.435674    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:41.435674    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:41.435674    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:41.441359    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:41.940538    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:41.940538    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:41.940538    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:41.940538    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:41.945201    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:42.438880    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:42.438880    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:42.438880    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:42.438880    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:42.445609    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:42.446977    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:42.937906    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:42.937906    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:42.937992    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:42.937992    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:42.942264    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:43.438860    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.438860    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.438860    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.438860    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.444916    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:43.926453    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.926759    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.926759    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.926759    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.931230    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:43.933087    9488 node_ready.go:49] node "ha-207300-m02" has status "Ready":"True"
	I0731 22:35:43.933146    9488 node_ready.go:38] duration metric: took 22.0068644s for node "ha-207300-m02" to be "Ready" ...
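
The ~500 ms GET loop above is minikube polling the node object until its Ready condition turns True. Below is a minimal client-go sketch of the same check; the kubeconfig path is a placeholder, the node name is taken from this run, and this is not minikube's actual code.

    // Sketch only: poll a node until its Ready condition is True, as in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; minikube points clients at the profile's generated kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\me\.kube\config`)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, "ha-207300-m02", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence seen in the log
        }
    }
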
	I0731 22:35:43.933146    9488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:35:43.933233    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:43.933333    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.933333    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.933333    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.940239    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:43.950192    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.950773    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76ftg
	I0731 22:35:43.950940    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.950940    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.950940    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.956797    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:43.957908    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.957952    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.957952    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.958173    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.960936    9488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:35:43.962261    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.962261    9488 pod_ready.go:81] duration metric: took 11.4872ms for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.962261    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.962261    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8xt8f
	I0731 22:35:43.962261    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.962261    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.962261    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.967734    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:43.968699    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.968699    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.968699    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.968699    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.971921    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.973007    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.973007    9488 pod_ready.go:81] duration metric: took 10.7456ms for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.973007    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.973393    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300
	I0731 22:35:43.973393    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.973393    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.973393    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.977243    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.978340    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.978424    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.978424    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.978424    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.982096    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.983081    9488 pod_ready.go:92] pod "etcd-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.983373    9488 pod_ready.go:81] duration metric: took 10.1691ms for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.983373    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.983467    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m02
	I0731 22:35:43.983559    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.983559    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.983589    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.986750    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.987979    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.987979    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.987979    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.987979    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.991580    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.992416    9488 pod_ready.go:92] pod "etcd-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.992494    9488 pod_ready.go:81] duration metric: took 9.1208ms for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.992494    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.127498    9488 request.go:629] Waited for 134.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:35:44.127498    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:35:44.127583    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.127583    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.127583    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.137513    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:35:44.331063    9488 request.go:629] Waited for 192.8182ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:44.331425    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:44.331425    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.331425    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.331553    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.335813    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:44.336853    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:44.337029    9488 pod_ready.go:81] duration metric: took 344.5314ms for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
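
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (default QPS 5, burst 10), not from server-side API Priority and Fairness. A hedged sketch of loosening that limiter on the rest.Config; the values are illustrative, not what minikube uses.

    // Sketch only: raise client-go's client-side rate limits to avoid the
    // "client-side throttling" delays seen in the log.
    package kubeclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default is 10
        return kubernetes.NewForConfig(cfg)
    }
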
	I0731 22:35:44.337029    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.535399    9488 request.go:629] Waited for 198.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:35:44.535776    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:35:44.535776    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.535776    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.535776    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.540357    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:44.737649    9488 request.go:629] Waited for 195.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:44.738013    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:44.738013    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.738013    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.738013    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.743093    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:44.744343    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:44.744430    9488 pod_ready.go:81] duration metric: took 407.3954ms for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.744430    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.940396    9488 request.go:629] Waited for 195.742ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:35:44.940790    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:35:44.940790    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.940790    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.940790    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.953555    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:35:45.127368    9488 request.go:629] Waited for 172.7575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:45.127861    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:45.127861    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.127861    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.127861    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.133173    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:45.133888    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.133888    9488 pod_ready.go:81] duration metric: took 389.4527ms for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.133888    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.330511    9488 request.go:629] Waited for 195.8988ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:35:45.330764    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:35:45.330764    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.330764    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.330764    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.335195    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.535627    9488 request.go:629] Waited for 198.6258ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.535746    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.535746    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.535746    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.535746    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.541592    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:45.542534    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.542534    9488 pod_ready.go:81] duration metric: took 408.6405ms for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.542608    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.739119    9488 request.go:629] Waited for 195.7107ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:35:45.739119    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:35:45.739119    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.739119    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.739119    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.744986    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.928317    9488 request.go:629] Waited for 182.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.928650    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.928720    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.928720    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.928720    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.933720    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.934510    9488 pod_ready.go:92] pod "kube-proxy-htmnf" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.934510    9488 pod_ready.go:81] duration metric: took 391.8967ms for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.934510    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.131266    9488 request.go:629] Waited for 196.5224ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:35:46.131351    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:35:46.131548    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.131548    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.131548    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.136296    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:46.335070    9488 request.go:629] Waited for 197.2709ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.335327    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.335327    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.335327    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.335327    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.340431    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:46.341834    9488 pod_ready.go:92] pod "kube-proxy-z5gbs" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:46.341834    9488 pod_ready.go:81] duration metric: took 407.3191ms for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.341834    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.537160    9488 request.go:629] Waited for 195.3234ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:35:46.537537    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:35:46.537729    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.537729    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.537729    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.545635    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:46.740481    9488 request.go:629] Waited for 194.0038ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.740667    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.740667    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.740667    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.740769    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.751063    9488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 22:35:46.752805    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:46.752805    9488 pod_ready.go:81] duration metric: took 410.9658ms for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.752805    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.928652    9488 request.go:629] Waited for 175.6863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:35:46.928820    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:35:46.928906    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.928906    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.928906    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.933192    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:47.129733    9488 request.go:629] Waited for 195.4818ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:47.129733    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:47.129971    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.129971    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.129971    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.134352    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:47.135099    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:47.135630    9488 pod_ready.go:81] duration metric: took 382.7192ms for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:47.135630    9488 pod_ready.go:38] duration metric: took 3.2024427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
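
Each pod_ready block above follows one pattern: fetch the pod, fetch the node it runs on, and require the pod's Ready condition to be True. Here is a sketch of that check over the label selectors listed in the log; the function names are mine, not minikube's.

    // Sketch only: verify that every system-critical pod in kube-system is Ready.
    package podready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // allCriticalReady lists kube-system pods for each selector (e.g. "component=etcd",
    // "k8s-app=kube-dns") and reports whether every matching pod is Ready.
    func allCriticalReady(ctx context.Context, client kubernetes.Interface, selectors []string) (bool, error) {
        for _, sel := range selectors {
            pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for i := range pods.Items {
                if !podIsReady(&pods.Items[i]) {
                    return false, nil
                }
            }
        }
        return true, nil
    }
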
	I0731 22:35:47.135630    9488 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:35:47.149378    9488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:35:47.177875    9488 api_server.go:72] duration metric: took 25.6527504s to wait for apiserver process to appear ...
	I0731 22:35:47.177875    9488 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:35:47.177875    9488 api_server.go:253] Checking apiserver healthz at https://172.17.21.92:8443/healthz ...
	I0731 22:35:47.187967    9488 api_server.go:279] https://172.17.21.92:8443/healthz returned 200:
	ok
	I0731 22:35:47.188092    9488 round_trippers.go:463] GET https://172.17.21.92:8443/version
	I0731 22:35:47.188092    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.188092    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.188092    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.189678    9488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 22:35:47.190390    9488 api_server.go:141] control plane version: v1.30.3
	I0731 22:35:47.190435    9488 api_server.go:131] duration metric: took 12.5598ms to wait for apiserver health ...
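
After the process check, minikube probes /healthz and then /version on the apiserver. The sketch below issues the same two probes through the clientset's discovery REST client; whether minikube does it exactly this way is not shown in the log, but the expected "ok" body and the v1.30.3 version match this run.

    // Sketch only: healthz and version probes against the apiserver.
    package apihealth

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
        // GET /healthz should return the literal body "ok" with HTTP 200.
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version reports the control-plane version (v1.30.3 in this run).
        ver, err := client.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Printf("control plane version: %s\n", ver.GitVersion)
        return nil
    }
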
	I0731 22:35:47.190435    9488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:35:47.334613    9488 request.go:629] Waited for 143.6815ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.334613    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.334613    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.334613    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.334613    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.343260    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:35:47.351103    9488 system_pods.go:59] 17 kube-system pods found
	I0731 22:35:47.351103    9488 system_pods.go:61] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:35:47.351619    9488 system_pods.go:61] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:35:47.351741    9488 system_pods.go:61] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:35:47.351870    9488 system_pods.go:74] duration metric: took 161.4325ms to wait for pod list to return data ...
	I0731 22:35:47.351870    9488 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:35:47.527068    9488 request.go:629] Waited for 175.1961ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:35:47.527068    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:35:47.527068    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.527068    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.527068    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.533253    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:47.533699    9488 default_sa.go:45] found service account: "default"
	I0731 22:35:47.533776    9488 default_sa.go:55] duration metric: took 181.9036ms for default service account to be created ...
	I0731 22:35:47.533776    9488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:35:47.739837    9488 request.go:629] Waited for 205.8422ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.739837    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.739837    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.739837    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.739837    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.747834    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:47.755035    9488 system_pods.go:86] 17 kube-system pods found
	I0731 22:35:47.755035    9488 system_pods.go:89] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:35:47.755722    9488 system_pods.go:89] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:35:47.755722    9488 system_pods.go:126] duration metric: took 221.9437ms to wait for k8s-apps to be running ...
	I0731 22:35:47.755722    9488 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:35:47.768176    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:35:47.792677    9488 system_svc.go:56] duration metric: took 36.9545ms WaitForService to wait for kubelet
	I0731 22:35:47.792735    9488 kubeadm.go:582] duration metric: took 26.2676027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:35:47.792824    9488 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:35:47.941797    9488 request.go:629] Waited for 148.9113ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes
	I0731 22:35:47.941982    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes
	I0731 22:35:47.941982    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.941982    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.941982    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.949390    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:47.950110    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:35:47.950110    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:47.950110    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:35:47.950110    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:47.950110    9488 node_conditions.go:105] duration metric: took 157.2847ms to run NodePressure ...
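
The NodePressure summary prints each node's ephemeral-storage and cpu capacity from a single node list. A sketch that reads the same figures follows; the log does not show whether minikube reads Capacity or Allocatable, so the use of Allocatable here is an assumption.

    // Sketch only: report per-node cpu and ephemeral-storage, as summarized above.
    package nodecap

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printCapacities(ctx context.Context, client kubernetes.Interface) error {
        nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Allocatable[corev1.ResourceCPU]                 // "2" in this run
            storage := n.Status.Allocatable[corev1.ResourceEphemeralStorage] // "17734596Ki" in this run
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }
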
	I0731 22:35:47.950110    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:35:47.950110    9488 start.go:255] writing updated cluster config ...
	I0731 22:35:47.955043    9488 out.go:177] 
	I0731 22:35:47.970457    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:35:47.970457    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:35:47.978709    9488 out.go:177] * Starting "ha-207300-m03" control-plane node in "ha-207300" cluster
	I0731 22:35:47.981408    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:35:47.981408    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:35:47.982106    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:35:47.982106    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:35:47.982106    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:35:47.988064    9488 start.go:360] acquireMachinesLock for ha-207300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:35:47.988393    9488 start.go:364] duration metric: took 329.1µs to acquireMachinesLock for "ha-207300-m03"
	I0731 22:35:47.988713    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:35:47.988770    9488 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0731 22:35:47.992938    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:35:47.993291    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:35:47.993381    9488 client.go:168] LocalClient.Create starting
	I0731 22:35:47.993905    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:35:47.994188    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:35:47.994188    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:35:47.994402    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:35:47.994673    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:35:47.994749    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:35:47.994807    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:35:51.579446    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:35:51.579446    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:51.579699    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:35:53.076871    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:35:53.076871    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:53.076960    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:35:56.795994    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:35:56.795994    9488 main.go:141] libmachine: [stderr =====>] : 
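
The hyperv driver shells out to powershell.exe for every Hyper-V operation and parses JSON where it needs structured output, as in the Get-VMSwitch call above. Below is a simplified Go sketch of that pattern; the Where-Object/Sort-Object clauses from the log are dropped for brevity, and the type and function names are mine.

    // Sketch only: run Get-VMSwitch via powershell.exe and decode the JSON output.
    package hypervsw

    import (
        "encoding/json"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func listSwitches() ([]vmSwitch, error) {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return nil, err
        }
        // In this run the output was a one-element JSON array holding "Default Switch".
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            return nil, err
        }
        return switches, nil
    }
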
	I0731 22:35:56.798779    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:35:57.240613    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:35:57.538184    9488 main.go:141] libmachine: Creating VM...
	I0731 22:35:57.539189    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:36:00.428942    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:36:00.429009    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:00.429123    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:36:00.429166    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:36:02.174101    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:36:02.175149    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:02.175149    9488 main.go:141] libmachine: Creating VHD
	I0731 22:36:02.175287    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:36:05.954790    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F8A42F50-25AA-47EC-8979-49537C925629
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:36:05.955779    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:05.955779    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:36:05.955779    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:36:05.970156    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:36:09.235871    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:09.235871    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:09.236528    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd' -SizeBytes 20000MB
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:36:15.517490    9488 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-207300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:36:15.517848    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:15.517988    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300-m03 -DynamicMemoryEnabled $false
	I0731 22:36:17.843170    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:17.844314    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:17.844395    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300-m03 -Count 2
	I0731 22:36:20.022845    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:20.022845    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:20.023166    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\boot2docker.iso'
	I0731 22:36:22.614596    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:22.615193    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:22.617253    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd'
	I0731 22:36:25.292881    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:25.293295    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:25.293295    9488 main.go:141] libmachine: Starting VM...
	I0731 22:36:25.293295    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300-m03
	I0731 22:36:28.387819    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:28.388125    9488 main.go:141] libmachine: [stderr =====>] : 
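
The sequence from New-VHD through Start-VM above is the whole VM-creation path, one PowerShell invocation per step. Here is a condensed sketch using the same commands the log shows; the helper name, loop, and error handling are mine, and the small SSH-key tar written into fixed.vhd between the first two steps is only noted in a comment.

    // Sketch only: the Hyper-V VM creation steps seen in the log, one powershell.exe call each.
    package hypervvm

    import (
        "fmt"
        "os/exec"
    )

    func ps(args string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", args).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", args, err, out)
        }
        return nil
    }

    func createAndStartVM(name, dir string) error {
        steps := []string{
            fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
            // (minikube then writes a small tar containing the SSH key into fixed.vhd)
            fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
            fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
            fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
            fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
            fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
            fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
            fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
            fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
        }
        for _, s := range steps {
            if err := ps(s); err != nil {
                return err
            }
        }
        return nil
    }
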
	I0731 22:36:28.388125    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:36:28.388200    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:30.701611    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:30.701648    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:30.701648    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:33.271812    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:33.271812    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:34.282959    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:36.517316    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:36.517515    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:36.517515    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:39.077587    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:39.077587    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:40.091768    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:42.310468    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:42.311179    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:42.311262    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:44.845807    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:44.846235    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:45.851974    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:50.758140    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:50.758174    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:51.768442    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [stderr =====>] : 
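
	The loop above repeatedly queries the VM state and its first network adapter until Hyper-V finally reports an IPv4 address (172.17.27.253 at 22:36:56). A minimal Go sketch of that retry pattern follows; the helper names, the one-second poll interval, and the timeout are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// getVMIP shells out to PowerShell the same way the log above does and returns
	// the first address on the VM's first network adapter; the result is empty
	// until DHCP on the Default Switch has completed.
	func getVMIP(name string) (string, error) {
		script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP retries once per second until an address appears or the timeout elapses.
	func waitForIP(name string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip, err := getVMIP(name); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for VM %q to report an IP", name)
	}

	func main() {
		ip, err := waitForIP("ha-207300-m03", 2*time.Minute)
		fmt.Println(ip, err)
	}
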
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:58.772984    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:58.773885    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:58.773885    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:36:58.774158    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:00.971846    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:00.971846    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:00.972000    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:03.523877    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:03.523877    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:03.531195    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:03.543089    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:03.543089    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:37:03.687934    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:37:03.688034    9488 buildroot.go:166] provisioning hostname "ha-207300-m03"
	I0731 22:37:03.688115    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:05.863860    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:05.863860    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:05.864050    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:08.467287    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:08.467287    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:08.474067    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:08.474203    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:08.474203    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300-m03 && echo "ha-207300-m03" | sudo tee /etc/hostname
	I0731 22:37:08.626461    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300-m03
	
	I0731 22:37:08.626461    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:13.302675    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:13.302675    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:13.307869    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:13.308400    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:13.308400    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:37:13.452814    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:37:13.452814    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:37:13.452814    9488 buildroot.go:174] setting up certificates
	I0731 22:37:13.452814    9488 provision.go:84] configureAuth start
	I0731 22:37:13.452814    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:15.576665    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:15.577655    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:15.577778    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:18.122403    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:18.122531    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:18.122531    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:20.257824    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:20.258100    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:20.258192    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:22.832950    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:22.834390    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:22.834434    9488 provision.go:143] copyHostCerts
	I0731 22:37:22.834650    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:37:22.835178    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:37:22.835178    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:37:22.835715    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:37:22.837345    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:37:22.837631    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:37:22.837738    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:37:22.838156    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:37:22.839556    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:37:22.839947    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:37:22.840089    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:37:22.840611    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:37:22.841861    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300-m03 san=[127.0.0.1 172.17.27.253 ha-207300-m03 localhost minikube]
	I0731 22:37:22.938174    9488 provision.go:177] copyRemoteCerts
	I0731 22:37:22.955709    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:37:22.955709    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:25.094045    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:25.094548    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:25.094548    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:27.676229    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:27.676229    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:27.676229    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:37:27.782588    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8268174s)
	I0731 22:37:27.782588    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:37:27.783365    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:37:27.829689    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:37:27.830096    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:37:27.873043    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:37:27.873362    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:37:27.915544    9488 provision.go:87] duration metric: took 14.4624758s to configureAuth
	I0731 22:37:27.915742    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:37:27.916413    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:37:27.916477    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:30.048834    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:30.048834    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:30.049185    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:32.584705    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:32.584705    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:32.590486    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:32.591151    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:32.591151    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:37:32.708807    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:37:32.708940    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:37:32.709141    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:37:32.709141    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:34.838829    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:34.838829    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:34.839467    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:37.382578    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:37.382578    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:37.389343    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:37.390002    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:37.390002    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.21.92"
	Environment="NO_PROXY=172.17.21.92,172.17.28.136"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:37:37.535843    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.21.92
	Environment=NO_PROXY=172.17.21.92,172.17.28.136
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:37:37.536008    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:39.656878    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:39.657936    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:39.658019    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:42.178995    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:42.178995    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:42.185141    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:42.185957    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:42.185957    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:37:44.399048    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:37:44.399112    9488 machine.go:97] duration metric: took 45.6246476s to provisionDockerMachine
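
	In the step above, diff exits non-zero because /lib/systemd/system/docker.service does not yet exist on the fresh node, so the || branch installs the new unit and then enables and restarts Docker. A Go sketch of how that install-if-changed one-liner could be assembled is below; it is illustrative only and not minikube's actual provisioner code.

	package main

	import "fmt"

	// updateDockerUnitCmd assembles the install-if-changed one-liner: diff the
	// freshly written unit against the installed one and only move it into place
	// (plus daemon-reload, enable, restart) when they differ or the file is absent.
	func updateDockerUnitCmd(unitPath string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || "+
				"{ sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && "+
				"sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }",
			unitPath)
	}

	func main() {
		fmt.Println(updateDockerUnitCmd("/lib/systemd/system/docker.service"))
	}
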
	I0731 22:37:44.399204    9488 client.go:171] duration metric: took 1m56.4042867s to LocalClient.Create
	I0731 22:37:44.399271    9488 start.go:167] duration metric: took 1m56.4045003s to libmachine.API.Create "ha-207300"
	I0731 22:37:44.399379    9488 start.go:293] postStartSetup for "ha-207300-m03" (driver="hyperv")
	I0731 22:37:44.399409    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:37:44.412864    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:37:44.412864    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:46.571406    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:46.571406    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:46.572480    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:49.101309    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:49.101309    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:49.102190    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:37:49.208596    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.795671s)
	I0731 22:37:49.220887    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:37:49.228395    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:37:49.228395    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:37:49.228395    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:37:49.229805    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:37:49.229805    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:37:49.240654    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:37:49.259406    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:37:49.308389    9488 start.go:296] duration metric: took 4.9089182s for postStartSetup
	I0731 22:37:49.311498    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:53.995671    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:53.995671    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:53.996412    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:37:53.998927    9488 start.go:128] duration metric: took 2m6.008555s to createHost
	I0731 22:37:53.998927    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:56.127180    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:56.127180    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:56.127398    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:58.677704    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:58.678660    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:58.685210    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:58.685390    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:58.685390    9488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 22:37:58.812499    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465478.832201479
	
	I0731 22:37:58.812566    9488 fix.go:216] guest clock: 1722465478.832201479
	I0731 22:37:58.812566    9488 fix.go:229] Guest: 2024-07-31 22:37:58.832201479 +0000 UTC Remote: 2024-07-31 22:37:53.9989272 +0000 UTC m=+573.748040801 (delta=4.833274279s)
	I0731 22:37:58.812630    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:00.946426    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:00.946426    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:00.946675    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:03.484827    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:03.484935    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:03.490940    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:38:03.491579    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:38:03.491579    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465478
	I0731 22:38:03.621316    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:37:58 UTC 2024
	
	I0731 22:38:03.621316    9488 fix.go:236] clock set: Wed Jul 31 22:37:58 UTC 2024
	 (err=<nil>)
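
	The fix.go lines above read the guest's clock over SSH, report the delta against the host-side timestamp (about 4.8s here), and then run sudo date -s @<seconds> on the guest. A small Go sketch of that check follows; which clock is treated as authoritative and the one-second threshold are assumptions for the sketch, since the log only shows the delta and the date command.

	package main

	import (
		"fmt"
		"time"
	)

	// maybeSetGuestClock returns the SSH command to run when the guest clock has
	// drifted more than maxSkew from the reference time.
	func maybeSetGuestClock(guest, ref time.Time, maxSkew time.Duration) (string, bool) {
		skew := ref.Sub(guest)
		if skew < 0 {
			skew = -skew
		}
		if skew <= maxSkew {
			return "", false // clocks are close enough; nothing to do
		}
		return fmt.Sprintf("sudo date -s @%d", ref.Unix()), true
	}

	func main() {
		guest := time.Now().Add(-5 * time.Second) // pretend the guest runs 5s behind
		if cmd, ok := maybeSetGuestClock(guest, time.Now(), time.Second); ok {
			fmt.Println(cmd)
		}
	}
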
	I0731 22:38:03.621316    9488 start.go:83] releasing machines lock for "ha-207300-m03", held for 2m15.6310507s
	I0731 22:38:03.621316    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:05.821864    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:05.821864    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:05.822223    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:08.377481    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:08.378625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:08.381202    9488 out.go:177] * Found network options:
	I0731 22:38:08.384467    9488 out.go:177]   - NO_PROXY=172.17.21.92,172.17.28.136
	W0731 22:38:08.386829    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.387088    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:38:08.389187    9488 out.go:177]   - NO_PROXY=172.17.21.92,172.17.28.136
	W0731 22:38:08.391970    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.391970    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.392984    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.392984    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:38:08.395329    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:38:08.395329    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:08.406380    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:38:08.406616    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:10.621828    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:10.621890    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:10.621890    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:13.369428    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:13.369428    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:13.370049    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:38:13.393750    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:13.394091    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:13.394229    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:38:13.463314    9488 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0568696s)
	W0731 22:38:13.463385    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:38:13.475831    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:38:13.480687    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0852932s)
	W0731 22:38:13.480687    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:38:13.513595    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:38:13.513595    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:38:13.513595    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:38:13.566572    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 22:38:13.579744    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:38:13.579851    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 22:38:13.604665    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:38:13.624648    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:38:13.635032    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:38:13.664996    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:38:13.696282    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:38:13.725313    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:38:13.759852    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:38:13.791303    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:38:13.822923    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:38:13.853010    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:38:13.886769    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:38:13.915493    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:38:13.948046    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:14.144913    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 22:38:14.175937    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:38:14.187882    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:38:14.225220    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:38:14.259812    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:38:14.303454    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:38:14.341741    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:38:14.376954    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:38:14.440515    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:38:14.462651    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:38:14.506868    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:38:14.525517    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:38:14.543985    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:38:14.587804    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:38:14.778295    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:38:14.959687    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:38:14.959687    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:38:15.008256    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:15.210518    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:38:17.804410    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5929023s)
	I0731 22:38:17.816430    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:38:17.853263    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:38:17.887154    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:38:18.078419    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:38:18.285769    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:18.481948    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:38:18.521357    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:38:18.556479    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:18.746795    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:38:18.851229    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:38:18.863736    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:38:18.873357    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:38:18.886310    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:38:18.904085    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:38:18.953573    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
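
	The lines above show minikube waiting up to 60s for /var/run/cri-dockerd.sock to appear and for crictl to answer before proceeding. A Go sketch of such a stat-based wait is below; runSSH is a hypothetical stand-in for minikube's ssh_runner, and the 500ms poll interval is an assumption made for the sketch.

	package main

	import (
		"fmt"
		"time"
	)

	// waitForSocket stats the CRI socket over SSH until it exists or the deadline passes.
	func waitForSocket(runSSH func(cmd string) error, path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := runSSH("stat " + path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// A fake runner that succeeds immediately keeps the sketch executable.
		err := waitForSocket(func(string) error { return nil }, "/var/run/cri-dockerd.sock", 60*time.Second)
		fmt.Println(err)
	}
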
	I0731 22:38:18.966158    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:38:19.007385    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:38:19.049914    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:38:19.052797    9488 out.go:177]   - env NO_PROXY=172.17.21.92
	I0731 22:38:19.055822    9488 out.go:177]   - env NO_PROXY=172.17.21.92,172.17.28.136
	I0731 22:38:19.058706    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:38:19.064567    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:38:19.069685    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:38:19.069791    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:38:19.081823    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:38:19.087529    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:38:19.109003    9488 mustload.go:65] Loading cluster: ha-207300
	I0731 22:38:19.109356    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:38:19.111550    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:21.247915    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:21.248404    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:21.248404    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:38:21.249311    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.27.253
	I0731 22:38:21.249439    9488 certs.go:194] generating shared ca certs ...
	I0731 22:38:21.249439    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.250070    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:38:21.250509    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:38:21.250652    9488 certs.go:256] generating profile certs ...
	I0731 22:38:21.251523    9488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:38:21.251646    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5
	I0731 22:38:21.251728    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.28.136 172.17.27.253 172.17.31.254]
	I0731 22:38:21.418588    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 ...
	I0731 22:38:21.418588    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5: {Name:mka680cd694ec31b470fbdebbe35a08239b1d83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.420533    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5 ...
	I0731 22:38:21.420533    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5: {Name:mkb956b21f624b97c9b78796c61257e2a25e2069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.421539    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:38:21.434019    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:38:21.437185    9488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
	I0731 22:38:21.437185    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:38:21.437185    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:38:21.438205    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:38:21.438491    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:38:21.438491    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:38:21.439360    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:38:21.439658    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:38:21.439658    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:38:21.439966    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:38:21.440240    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:38:21.440482    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:38:21.440732    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:38:21.440732    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:38:21.441292    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:38:21.441459    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:21.441606    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:23.595247    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:23.596132    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:23.596132    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:26.149483    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:38:26.149483    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:26.149483    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:38:26.254263    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0731 22:38:26.262653    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:38:26.300121    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0731 22:38:26.307420    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 22:38:26.346166    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:38:26.351662    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:38:26.381697    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:38:26.388772    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 22:38:26.419530    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:38:26.425804    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:38:26.458978    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0731 22:38:26.465916    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:38:26.484999    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:38:26.536469    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:38:26.580256    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:38:26.622999    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:38:26.666978    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 22:38:26.713161    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:38:26.761396    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:38:26.806890    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:38:26.856170    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:38:26.905475    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:38:26.951448    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:38:26.995855    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:38:27.027531    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 22:38:27.063394    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:38:27.094437    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 22:38:27.125319    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:38:27.164774    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:38:27.195641    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:38:27.239891    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:38:27.258647    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:38:27.288264    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.295030    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.306904    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.327832    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:38:27.359037    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:38:27.390179    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.396470    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.408404    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.427323    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 22:38:27.458151    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:38:27.489304    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.495886    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.509620    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.530754    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:38:27.562328    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:38:27.568520    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:38:27.568838    9488 kubeadm.go:934] updating node {m03 172.17.27.253 8443 v1.30.3 docker true true} ...
	I0731 22:38:27.569062    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.27.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:38:27.569062    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:38:27.580713    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:38:27.605507    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:38:27.605638    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
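The generated kube-vip static pod above advertises 172.17.31.254:8443 as the control-plane VIP and enables leader election and load-balancing (cp_enable / lb_enable). A quick, driver-agnostic way to confirm the VIP is answering once kube-vip settles is a plain TCP probe; a minimal Go sketch using the address from the manifest:

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the generated kube-vip manifest above.
        const vip = "172.17.31.254:8443"

        // A bare TCP dial is enough to show the VIP is being advertised
        // and that the API server behind it accepts connections.
        conn, err := net.DialTimeout("tcp", vip, 5*time.Second)
        if err != nil {
            log.Fatalf("VIP not reachable: %v", err)
        }
        defer conn.Close()
        fmt.Println("control-plane VIP is reachable:", conn.RemoteAddr())
    }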
	I0731 22:38:27.620004    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:38:27.638642    9488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:38:27.650989    9488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 22:38:27.670987    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:38:27.670987    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:38:27.686681    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:38:27.687887    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:38:27.687887    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:38:27.708794    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:38:27.708794    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:38:27.708871    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:38:27.708996    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:38:27.709044    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:38:27.720931    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:38:27.766557    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:38:27.766890    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
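Because /var/lib/minikube/binaries/v1.30.3 was empty on m03, kubeadm, kubectl and kubelet are pushed from the host cache; the "Not caching binary" lines show they originate from dl.k8s.io with a published .sha256 checksum. A minimal Go sketch of that download-and-verify pattern, using the kubectl URL from the log (the local destination path is illustrative):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads url into dest and returns the hex SHA-256 of the body.
    func fetch(url, dest string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return "", err
        }
        defer out.Close()

        h := sha256.New()
        // Write to the file and to the hash in one pass.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"

        got, err := fetch(base, "kubectl") // "kubectl" here is an illustrative local path
        if err != nil {
            log.Fatal(err)
        }

        // The published .sha256 file carries the hex digest (optionally followed by a filename).
        resp, err := http.Get(base + ".sha256")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        raw, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fields := strings.Fields(string(raw))
        if len(fields) == 0 {
            log.Fatal("empty checksum file")
        }

        if got != fields[0] {
            log.Fatalf("checksum mismatch: got %s want %s", got, fields[0])
        }
        fmt.Println("kubectl checksum verified")
    }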
	I0731 22:38:29.072388    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:38:29.090786    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:38:29.124611    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:38:29.154800    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0731 22:38:29.202884    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:38:29.209804    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
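The bash one-liner above upserts the control-plane.minikube.internal entry in /etc/hosts: it drops any stale line for that name and appends the HA VIP 172.17.31.254. The same logic as a hedged Go sketch (path and entry taken from the log; it rewrites the file in place rather than via the temp-file step the shell version uses):

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // upsertHost removes any existing line ending in "\thost" and appends "ip\thost".
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as the grep -v $'\tcontrol-plane.minikube.internal$' above.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))

        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "172.17.31.254", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }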
	I0731 22:38:29.250407    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:29.450088    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:38:29.481244    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:38:29.482337    9488 start.go:317] joinCluster: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:38:29.482728    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:38:29.482844    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:31.641168    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:31.641742    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:31.641843    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:34.242737    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:38:34.242819    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:34.243999    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:38:34.456390    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9735989s)
	I0731 22:38:34.456503    9488 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:38:34.456604    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w383sl.wdrouoc2exvlz1v1 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m03 --control-plane --apiserver-advertise-address=172.17.27.253 --apiserver-bind-port=8443"
	I0731 22:39:17.459471    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w383sl.wdrouoc2exvlz1v1 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m03 --control-plane --apiserver-advertise-address=172.17.27.253 --apiserver-bind-port=8443": (43.0022533s)
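The join command above carries a short-lived bootstrap token plus --discovery-token-ca-cert-hash, which pins the cluster CA by the SHA-256 of its Subject Public Key Info. A minimal Go sketch that recomputes that pin from a PEM-encoded CA certificate (the ca.crt path is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Illustrative path; any PEM-encoded cluster CA certificate works.
        data, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }

        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // kubeadm's pin is the SHA-256 of the DER-encoded Subject Public Key Info.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }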
	I0731 22:39:17.459621    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:39:18.339866    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300-m03 minikube.k8s.io/updated_at=2024_07_31T22_39_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=false
	I0731 22:39:18.532242    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-207300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:39:18.678788    9488 start.go:319] duration metric: took 49.1958265s to joinCluster
	I0731 22:39:18.678986    9488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:39:18.680028    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:39:18.681607    9488 out.go:177] * Verifying Kubernetes components...
	I0731 22:39:18.698250    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:39:19.080298    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:39:19.111942    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:39:19.113598    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:39:19.113652    9488 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.31.254:8443 with https://172.17.21.92:8443
	I0731 22:39:19.115857    9488 node_ready.go:35] waiting up to 6m0s for node "ha-207300-m03" to be "Ready" ...
	I0731 22:39:19.116058    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:19.116130    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:19.116130    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:19.116130    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:19.132504    9488 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 22:39:19.620429    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:19.620429    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:19.620429    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:19.620429    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:19.633133    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:20.129513    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:20.129513    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:20.129513    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:20.129513    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:20.138192    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:20.621713    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:20.622001    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:20.622001    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:20.622001    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:20.715209    9488 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0731 22:39:21.127693    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:21.127693    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:21.127693    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:21.127693    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:21.132288    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:21.133294    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:21.617086    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:21.617163    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:21.617163    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:21.617163    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:21.622694    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:22.120096    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:22.120096    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:22.120096    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:22.120096    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:22.252692    9488 round_trippers.go:574] Response Status: 200 OK in 132 milliseconds
	I0731 22:39:22.625952    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:22.625952    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:22.625952    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:22.625952    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:22.633696    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:23.130357    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:23.130357    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:23.130357    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:23.130357    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:23.145235    9488 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 22:39:23.146868    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:23.618945    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:23.618945    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:23.619012    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:23.619012    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:23.623268    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:24.124224    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:24.124224    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:24.124224    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:24.124224    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:24.127803    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:24.628644    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:24.628644    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:24.628644    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:24.628644    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:24.633288    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:25.128649    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:25.128795    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:25.128852    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:25.128852    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:25.135749    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:25.625537    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:25.625654    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:25.625654    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:25.625654    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:25.629252    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:25.631152    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:26.129280    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:26.129340    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:26.129340    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:26.129340    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:26.132504    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:26.629343    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:26.629343    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:26.629343    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:26.629343    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:26.635204    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:27.117728    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:27.117728    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:27.117809    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:27.117809    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:27.127153    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:39:27.617010    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:27.617270    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:27.617270    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:27.617270    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:27.629856    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:39:27.631313    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:28.119711    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:28.119816    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:28.119905    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:28.119905    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:28.124748    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:28.618412    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:28.618412    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:28.618412    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:28.618412    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:28.623053    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:29.118745    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:29.118839    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:29.118839    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:29.118839    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:29.123638    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:29.632011    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:29.632011    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:29.632126    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:29.632126    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:29.641429    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:39:29.642414    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:30.117775    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:30.117775    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:30.117775    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:30.117775    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:30.122413    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:30.617932    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:30.618237    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:30.618237    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:30.618237    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:30.623319    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:31.123580    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:31.123784    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:31.123784    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:31.123784    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:31.128649    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:31.625599    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:31.625788    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:31.625788    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:31.625788    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:31.630589    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:32.128889    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:32.129140    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:32.129140    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:32.129140    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:32.134517    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:32.136207    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:32.626966    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:32.626966    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:32.626966    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:32.627058    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:32.632680    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:33.128463    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:33.128579    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:33.128742    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:33.128742    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:33.133922    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:33.628254    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:33.628365    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:33.628365    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:33.628365    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:33.634665    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:34.127530    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.127530    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.127530    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.127530    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.132579    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:34.133286    9488 node_ready.go:49] node "ha-207300-m03" has status "Ready":"True"
	I0731 22:39:34.133286    9488 node_ready.go:38] duration metric: took 15.0172036s for node "ha-207300-m03" to be "Ready" ...
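The GET loop above is the node_ready wait: the node object is polled roughly every 500ms until its Ready condition reports True, which took about 15s here. The equivalent check with client-go, as a sketch (kubeconfig path and node name taken from the log; the poll loop is kept deliberately simple):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            node, err := client.CoreV1().Nodes().Get(ctx, "ha-207300-m03", metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            // The node is Ready when its NodeReady condition is True.
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for node to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }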
	I0731 22:39:34.133286    9488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:39:34.133286    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:34.133286    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.133286    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.133286    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.146011    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:39:34.155953    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.156945    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76ftg
	I0731 22:39:34.156945    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.156945    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.156945    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.160664    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.161629    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.161629    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.161629    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.161629    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.165217    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.165217    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.166210    9488 pod_ready.go:81] duration metric: took 10.2568ms for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.166210    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.166210    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8xt8f
	I0731 22:39:34.166210    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.166210    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.166210    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.170253    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.171158    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.171158    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.171158    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.171158    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.174291    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.175226    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.175226    9488 pod_ready.go:81] duration metric: took 9.0159ms for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.175226    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.175226    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300
	I0731 22:39:34.175226    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.175226    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.175226    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.179818    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.181230    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.181288    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.181325    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.181325    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.185632    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.186694    9488 pod_ready.go:92] pod "etcd-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.186694    9488 pod_ready.go:81] duration metric: took 11.4682ms for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.186694    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.186694    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m02
	I0731 22:39:34.186694    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.186694    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.186694    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.191645    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.193290    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:34.193376    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.193442    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.193503    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.197138    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.197138    9488 pod_ready.go:92] pod "etcd-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.197138    9488 pod_ready.go:81] duration metric: took 10.4435ms for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.197138    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.333237    9488 request.go:629] Waited for 136.0974ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m03
	I0731 22:39:34.333435    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m03
	I0731 22:39:34.333435    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.333435    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.333435    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.341661    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:34.536883    9488 request.go:629] Waited for 194.2327ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.537198    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.537198    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.537198    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.537198    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.542244    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:34.543619    9488 pod_ready.go:92] pod "etcd-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.543619    9488 pod_ready.go:81] duration metric: took 346.4766ms for pod "etcd-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
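The "Waited for ... due to client-side throttling" messages come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side priority and fairness; each per-pod check issues two GETs in quick succession, so requests start queueing. If these small waits mattered, raising QPS/Burst on the rest.Config would remove them; a minimal sketch:

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            log.Fatal(err)
        }

        // client-go defaults to QPS 5 / Burst 10; bursts of sequential GETs
        // (one per pod plus one per node, as above) then queue client-side.
        cfg.QPS = 50
        cfg.Burst = 100

        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
        log.Println("client configured with a larger client-side rate limit")
    }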
	I0731 22:39:34.543619    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.742359    9488 request.go:629] Waited for 198.5582ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:39:34.742561    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:39:34.742561    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.742561    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.742629    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.759289    9488 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 22:39:34.933114    9488 request.go:629] Waited for 172.1937ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.933253    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.933253    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.933253    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.933253    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.937836    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.939379    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.939439    9488 pod_ready.go:81] duration metric: took 395.8152ms for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.939439    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.137645    9488 request.go:629] Waited for 198.0637ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:39:35.138139    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:39:35.138198    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.138198    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.138198    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.143569    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:35.338798    9488 request.go:629] Waited for 193.894ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:35.339097    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:35.339254    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.339254    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.339254    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.346825    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:35.347552    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:35.347552    9488 pod_ready.go:81] duration metric: took 408.1079ms for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.347552    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.541780    9488 request.go:629] Waited for 193.941ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m03
	I0731 22:39:35.541904    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m03
	I0731 22:39:35.541904    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.541904    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.542049    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.548087    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:35.731665    9488 request.go:629] Waited for 181.8585ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:35.731796    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:35.731916    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.732007    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.732033    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.736421    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:35.737457    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:35.737457    9488 pod_ready.go:81] duration metric: took 389.8998ms for pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.737457    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.935888    9488 request.go:629] Waited for 198.1885ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:39:35.936120    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:39:35.936120    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.936120    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.936120    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.940271    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:36.137971    9488 request.go:629] Waited for 195.6006ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:36.137971    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:36.137971    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.137971    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.138187    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.143854    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:36.144642    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.144642    9488 pod_ready.go:81] duration metric: took 407.06ms for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.144642    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.341284    9488 request.go:629] Waited for 196.5075ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:39:36.341455    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:39:36.341455    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.341501    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.341501    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.346279    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:36.528962    9488 request.go:629] Waited for 181.3838ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:36.528962    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:36.528962    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.528962    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.528962    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.533545    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:36.534765    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.534765    9488 pod_ready.go:81] duration metric: took 390.1178ms for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.534765    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.732225    9488 request.go:629] Waited for 197.4573ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m03
	I0731 22:39:36.732581    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m03
	I0731 22:39:36.732581    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.732581    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.732671    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.738437    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:36.937609    9488 request.go:629] Waited for 197.9142ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:36.937609    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:36.937609    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.937609    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.937609    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.946603    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:36.947332    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.947332    9488 pod_ready.go:81] duration metric: took 412.5613ms for pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.947332    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f56f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.140292    9488 request.go:629] Waited for 192.7351ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f56f
	I0731 22:39:37.140292    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f56f
	I0731 22:39:37.140292    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.140292    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.140542    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.145859    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:37.328398    9488 request.go:629] Waited for 181.4529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:37.328714    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:37.328833    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.328833    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.328896    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.340618    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:37.341571    9488 pod_ready.go:92] pod "kube-proxy-2f56f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:37.341571    9488 pod_ready.go:81] duration metric: took 394.234ms for pod "kube-proxy-2f56f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.341571    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.531630    9488 request.go:629] Waited for 189.8312ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:39:37.531884    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:39:37.531884    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.531884    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.531884    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.537303    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:37.733756    9488 request.go:629] Waited for 195.3274ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:37.733863    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:37.733863    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.734094    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.734094    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.738241    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:37.739151    9488 pod_ready.go:92] pod "kube-proxy-htmnf" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:37.739151    9488 pod_ready.go:81] duration metric: took 397.5752ms for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.739151    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.936557    9488 request.go:629] Waited for 197.4037ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:39:37.936838    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:39:37.936838    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.936838    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.936838    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.941422    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.137561    9488 request.go:629] Waited for 194.978ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.137835    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.137835    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.137835    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.137835    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.145212    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:38.146483    9488 pod_ready.go:92] pod "kube-proxy-z5gbs" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.146586    9488 pod_ready.go:81] duration metric: took 407.4303ms for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.146586    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.341175    9488 request.go:629] Waited for 194.0369ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:39:38.341283    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:39:38.341283    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.341283    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.341283    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.346235    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.528846    9488 request.go:629] Waited for 180.9738ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.528846    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.529143    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.529143    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.529143    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.534917    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:38.535572    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.535572    9488 pod_ready.go:81] duration metric: took 388.9805ms for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.535572    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.730747    9488 request.go:629] Waited for 194.0529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:39:38.730747    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:39:38.730747    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.730747    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.730747    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.735375    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.935222    9488 request.go:629] Waited for 197.6597ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:38.935522    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:38.935635    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.935635    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.935635    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.939916    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.940983    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.941108    9488 pod_ready.go:81] duration metric: took 405.5311ms for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.941108    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:39.137252    9488 request.go:629] Waited for 196.0368ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m03
	I0731 22:39:39.137662    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m03
	I0731 22:39:39.137662    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.137662    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.137865    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.142197    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:39.339962    9488 request.go:629] Waited for 195.0352ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:39.340078    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:39.340078    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.340167    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.340238    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.346612    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:39.347813    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:39.347813    9488 pod_ready.go:81] duration metric: took 406.6999ms for pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:39.347813    9488 pod_ready.go:38] duration metric: took 5.2144606s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:39:39.347813    9488 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:39:39.359568    9488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:39:39.387357    9488 api_server.go:72] duration metric: took 20.7079866s to wait for apiserver process to appear ...
	I0731 22:39:39.387357    9488 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:39:39.387465    9488 api_server.go:253] Checking apiserver healthz at https://172.17.21.92:8443/healthz ...
	I0731 22:39:39.394573    9488 api_server.go:279] https://172.17.21.92:8443/healthz returned 200:
	ok
	I0731 22:39:39.394573    9488 round_trippers.go:463] GET https://172.17.21.92:8443/version
	I0731 22:39:39.394573    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.394573    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.398317    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.399123    9488 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 22:39:39.400201    9488 api_server.go:141] control plane version: v1.30.3
	I0731 22:39:39.400201    9488 api_server.go:131] duration metric: took 12.8437ms to wait for apiserver health ...
	I0731 22:39:39.400201    9488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:39:39.541809    9488 request.go:629] Waited for 141.3817ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.541985    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.541985    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.541985    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.541985    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.552569    9488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 22:39:39.562550    9488 system_pods.go:59] 24 kube-system pods found
	I0731 22:39:39.562597    9488 system_pods.go:61] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300-m03" [93daa58c-b243-42a4-bb99-041cbc686b58] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-x9ppc" [14752388-ec95-431d-80c6-86e6c4fd1c14] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300-m03" [45d4ac2d-f672-4bce-8d5a-f5d7b246b58c] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m03" [88e8a610-6178-4caf-9860-0a24b17386f5] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-2f56f" [045dbfdd-d6ef-4224-a868-0a71d78c2345] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300-m03" [857cc362-2f33-4e60-a7ed-bb207cd5b4b7] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300-m03" [d257808d-8954-4ca1-b3d7-b81468bf17df] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:39:39.562597    9488 system_pods.go:74] duration metric: took 162.3937ms to wait for pod list to return data ...
	I0731 22:39:39.562597    9488 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:39:39.729798    9488 request.go:629] Waited for 167.0073ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:39:39.729798    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:39:39.729798    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.729798    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.729798    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.735635    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:39.735971    9488 default_sa.go:45] found service account: "default"
	I0731 22:39:39.735971    9488 default_sa.go:55] duration metric: took 173.3722ms for default service account to be created ...
	I0731 22:39:39.735971    9488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:39:39.933172    9488 request.go:629] Waited for 196.9096ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.933309    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.933309    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.933309    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.933309    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.944637    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:39.954327    9488 system_pods.go:86] 24 kube-system pods found
	I0731 22:39:39.954327    9488 system_pods.go:89] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300-m03" [93daa58c-b243-42a4-bb99-041cbc686b58] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-x9ppc" [14752388-ec95-431d-80c6-86e6c4fd1c14] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300-m03" [45d4ac2d-f672-4bce-8d5a-f5d7b246b58c] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m03" [88e8a610-6178-4caf-9860-0a24b17386f5] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-2f56f" [045dbfdd-d6ef-4224-a868-0a71d78c2345] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300-m03" [857cc362-2f33-4e60-a7ed-bb207cd5b4b7] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300-m03" [d257808d-8954-4ca1-b3d7-b81468bf17df] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:39:39.954857    9488 system_pods.go:126] duration metric: took 218.8064ms to wait for k8s-apps to be running ...
	I0731 22:39:39.954857    9488 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:39:39.964348    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:39:39.996098    9488 system_svc.go:56] duration metric: took 41.2407ms WaitForService to wait for kubelet
	I0731 22:39:39.996098    9488 kubeadm.go:582] duration metric: took 21.3167196s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:39:39.996098    9488 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:39:40.136596    9488 request.go:629] Waited for 140.116ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes
	I0731 22:39:40.136804    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes
	I0731 22:39:40.136804    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:40.136804    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:40.136804    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:40.141855    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:40.144256    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144256    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144328    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144328    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:105] duration metric: took 148.2275ms to run NodePressure ...
	I0731 22:39:40.144328    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:39:40.144394    9488 start.go:255] writing updated cluster config ...
	I0731 22:39:40.156435    9488 ssh_runner.go:195] Run: rm -f paused
	I0731 22:39:40.305852    9488 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:39:40.309434    9488 out.go:177] * Done! kubectl is now configured to use "ha-207300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/383ca7ed078722c5076713b3759129562417aca4629178d90d94bf59407c308a/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ec30e750851512648397310d12de83abfcf8dfec70209ed81809a468cb758c0/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/685a24f7e87194d87281943fc543bcd38c32457da023a59b9272abcf739ddc96/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.928183289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933141822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933263823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933434824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366302985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366509985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366532285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.368686991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.380440222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.381804125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.382035826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.382477027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.059737265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060041767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060179167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060527369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:40:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/038d12e18eb5082f79f6a7f22b64a43502a4b0b9609b391d78911bb2dba52ec0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 22:40:20 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:40:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.973859815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974011117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974032017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974723426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	39b3a643e1150       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   038d12e18eb50       busybox-fc5497c4f-dmsjq
	ef2b9187dc7ad       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   685a24f7e8719       coredns-7db6d8ff4d-76ftg
	aee85563f6da1       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   5ec30e7508515       coredns-7db6d8ff4d-8xt8f
	9a35498ccbc6f       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   383ca7ed07872       storage-provisioner
	1aa0807dc075f       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              9 minutes ago        Running             kindnet-cni               0                   f2cb14db0f72d       kindnet-lmdqz
	76a17591c6fac       55bb025d2cfa5                                                                                         9 minutes ago        Running             kube-proxy                0                   c618b03095696       kube-proxy-z5gbs
	2994dd0871403       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   b8f8ab975dd56       kube-vip-ha-207300
	23266576b86cf       76932a3b37d7e                                                                                         10 minutes ago       Running             kube-controller-manager   0                   f41b2b390e4a3       kube-controller-manager-ha-207300
	ca42a9c8944b7       1f6d574d502f3                                                                                         10 minutes ago       Running             kube-apiserver            0                   43cc4ea2f8d23       kube-apiserver-ha-207300
	72d884b0f8834       3edc18e7b7672                                                                                         10 minutes ago       Running             kube-scheduler            0                   0b7f062808ba1       kube-scheduler-ha-207300
	f98bfdd5c1907       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   5451ccaff78bc       etcd-ha-207300
	
	
	==> coredns [aee85563f6da] <==
	[INFO] 10.244.1.2:60050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166802s
	[INFO] 10.244.0.4:44272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158902s
	[INFO] 10.244.0.4:55167 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137902s
	[INFO] 10.244.0.4:55952 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146902s
	[INFO] 10.244.0.4:35327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000252103s
	[INFO] 10.244.0.4:43599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126001s
	[INFO] 10.244.2.2:60189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115702s
	[INFO] 10.244.2.2:49019 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134702s
	[INFO] 10.244.2.2:43833 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081301s
	[INFO] 10.244.2.2:37834 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130701s
	[INFO] 10.244.1.2:37113 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167603s
	[INFO] 10.244.1.2:48182 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123902s
	[INFO] 10.244.1.2:36265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012707856s
	[INFO] 10.244.1.2:54993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129802s
	[INFO] 10.244.1.2:39553 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165102s
	[INFO] 10.244.1.2:37452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065401s
	[INFO] 10.244.0.4:55954 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090201s
	[INFO] 10.244.1.2:49247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172202s
	[INFO] 10.244.1.2:58188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082401s
	[INFO] 10.244.1.2:45588 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082901s
	[INFO] 10.244.0.4:39075 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123501s
	[INFO] 10.244.0.4:40567 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000151102s
	[INFO] 10.244.2.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120602s
	[INFO] 10.244.2.2:46069 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000059001s
	[INFO] 10.244.1.2:57058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239503s
	
	
	==> coredns [ef2b9187dc7a] <==
	[INFO] 10.244.1.2:36380 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.001019912s
	[INFO] 10.244.0.4:45601 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032407098s
	[INFO] 10.244.0.4:56553 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234803s
	[INFO] 10.244.0.4:41356 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005240664s
	[INFO] 10.244.2.2:52594 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073601s
	[INFO] 10.244.2.2:51267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129201s
	[INFO] 10.244.2.2:42341 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000051701s
	[INFO] 10.244.2.2:46960 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166702s
	[INFO] 10.244.1.2:45426 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000143802s
	[INFO] 10.244.1.2:47990 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152101s
	[INFO] 10.244.0.4:43210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179602s
	[INFO] 10.244.0.4:59126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000236803s
	[INFO] 10.244.0.4:46953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000332004s
	[INFO] 10.244.2.2:47159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189603s
	[INFO] 10.244.2.2:58078 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000628s
	[INFO] 10.244.2.2:50910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000385904s
	[INFO] 10.244.2.2:45683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079501s
	[INFO] 10.244.1.2:42810 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149602s
	[INFO] 10.244.0.4:54879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000376804s
	[INFO] 10.244.0.4:40853 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295304s
	[INFO] 10.244.2.2:48750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129201s
	[INFO] 10.244.2.2:45748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106901s
	[INFO] 10.244.1.2:34395 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177802s
	[INFO] 10.244.1.2:43660 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000535s
	[INFO] 10.244.1.2:42514 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090301s
	
	
	==> describe nodes <==
	Name:               ha-207300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_31_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:31:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:41:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:40:22 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:40:22 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:40:22 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:40:22 +0000   Wed, 31 Jul 2024 22:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.21.92
	  Hostname:    ha-207300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a148e76579a04c519b4c19b001798bd3
	  System UUID:                960376a8-fd40-614d-a948-5e6e5b08529e
	  Boot ID:                    6fcae202-face-4e62-bb79-1aea3b1cf7da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dmsjq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-76ftg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m49s
	  kube-system                 coredns-7db6d8ff4d-8xt8f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m49s
	  kube-system                 etcd-ha-207300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-lmdqz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-apiserver-ha-207300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-207300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-z5gbs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-scheduler-ha-207300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-207300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m47s  kube-proxy       
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-207300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-207300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-207300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m50s  node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	  Normal  NodeReady                9m29s  kubelet          Node ha-207300 status is now: NodeReady
	  Normal  RegisteredNode           5m48s  node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	  Normal  RegisteredNode           110s   node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	
	
	Name:               ha-207300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:35:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:41:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:40:51 +0000   Wed, 31 Jul 2024 22:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:40:51 +0000   Wed, 31 Jul 2024 22:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:40:51 +0000   Wed, 31 Jul 2024 22:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:40:51 +0000   Wed, 31 Jul 2024 22:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.28.136
	  Hostname:    ha-207300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb62ce90f3be47b4a23f24eb61648c2a
	  System UUID:                ecc49eb4-b0e3-e647-bbee-85d7ecde0688
	  Boot ID:                    7e79ea17-4642-4c4c-acdb-eb562d08a26f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x7dnz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-207300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-kz4x6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-207300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-207300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-htmnf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-207300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-207300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node ha-207300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node ha-207300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node ha-207300-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m5s                 node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	  Normal  RegisteredNode           5m48s                node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	  Normal  RegisteredNode           110s                 node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	
	
	Name:               ha-207300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_39_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:39:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:41:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:40:42 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:40:42 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:40:42 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:40:42 +0000   Wed, 31 Jul 2024 22:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.27.253
	  Hostname:    ha-207300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3aded06a5a224beeabb0709882901395
	  System UUID:                79b9d085-c6d3-e54b-af08-0e582d6afc79
	  Boot ID:                    06948a4e-8542-411e-9321-238514bfff17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8sql                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-207300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m11s
	  kube-system                 kindnet-x9ppc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m14s
	  kube-system                 kube-apiserver-ha-207300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-207300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-2f56f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-ha-207300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-vip-ha-207300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m15s)  kubelet          Node ha-207300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m15s)  kubelet          Node ha-207300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m15s)  kubelet          Node ha-207300-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m13s                  node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	  Normal  RegisteredNode           2m10s                  node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	  Normal  RegisteredNode           110s                   node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	
	
	==> dmesg <==
	[  +6.795448] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:30] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.169389] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +29.815796] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.097880] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.514174] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	[  +0.168230] systemd-fstab-generator[1052]: Ignoring "noauto" option for root device
	[  +0.215055] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +2.807586] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.191062] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.183350] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.269279] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[Jul31 22:31] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +0.097061] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.784760] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +7.388087] systemd-fstab-generator[1894]: Ignoring "noauto" option for root device
	[  +0.088756] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.219115] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.315294] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[ +15.143635] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.621970] kauditd_printk_skb: 29 callbacks suppressed
	[Jul31 22:35] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 22:38] hrtimer: interrupt took 3519619 ns
	
	
	==> etcd [f98bfdd5c190] <==
	{"level":"info","ts":"2024-07-31T22:39:20.614825Z","caller":"traceutil/trace.go:171","msg":"trace[1157483642] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"222.668433ms","start":"2024-07-31T22:39:20.392136Z","end":"2024-07-31T22:39:20.614804Z","steps":["trace[1157483642] 'process raft request'  (duration: 222.549532ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:39:20.623561Z","caller":"traceutil/trace.go:171","msg":"trace[1098774949] linearizableReadLoop","detail":"{readStateIndex:1726; appliedIndex:1727; }","duration":"176.011274ms","start":"2024-07-31T22:39:20.447537Z","end":"2024-07-31T22:39:20.623548Z","steps":["trace[1098774949] 'read index received'  (duration: 176.006474ms)","trace[1098774949] 'applied index is now lower than readState.Index'  (duration: 4.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T22:39:20.623795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.240675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-31T22:39:20.623999Z","caller":"traceutil/trace.go:171","msg":"trace[1871002847] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1551; }","duration":"176.491277ms","start":"2024-07-31T22:39:20.447497Z","end":"2024-07-31T22:39:20.623988Z","steps":["trace[1871002847] 'agreement among raft nodes before linearized reading'  (duration: 176.184675ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:39:20.730479Z","caller":"traceutil/trace.go:171","msg":"trace[1217526261] transaction","detail":"{read_only:false; response_revision:1552; number_of_response:1; }","duration":"298.924655ms","start":"2024-07-31T22:39:20.431508Z","end":"2024-07-31T22:39:20.730433Z","steps":["trace[1217526261] 'process raft request'  (duration: 211.023768ms)","trace[1217526261] 'compare'  (duration: 87.790086ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T22:39:20.731164Z","caller":"traceutil/trace.go:171","msg":"trace[1313557189] transaction","detail":"{read_only:false; response_revision:1554; number_of_response:1; }","duration":"101.821364ms","start":"2024-07-31T22:39:20.629309Z","end":"2024-07-31T22:39:20.73113Z","steps":["trace[1313557189] 'process raft request'  (duration: 101.681063ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:39:22.040102Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c6373ad9d5cfa5bb","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"98.919754ms"}
	{"level":"warn","ts":"2024-07-31T22:39:22.040297Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a278d90fe05b03d6","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"99.137855ms"}
	{"level":"info","ts":"2024-07-31T22:39:22.04103Z","caller":"traceutil/trace.go:171","msg":"trace[1079586983] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"299.503957ms","start":"2024-07-31T22:39:21.741511Z","end":"2024-07-31T22:39:22.041015Z","steps":["trace[1079586983] 'process raft request'  (duration: 299.084755ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:39:22.042451Z","caller":"traceutil/trace.go:171","msg":"trace[134105701] linearizableReadLoop","detail":"{readStateIndex:1737; appliedIndex:1738; }","duration":"262.04875ms","start":"2024-07-31T22:39:21.780393Z","end":"2024-07-31T22:39:22.042441Z","steps":["trace[134105701] 'read index received'  (duration: 262.04545ms)","trace[134105701] 'applied index is now lower than readState.Index'  (duration: 2.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T22:39:22.04251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.10835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T22:39:22.042761Z","caller":"traceutil/trace.go:171","msg":"trace[960132748] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1560; }","duration":"262.380452ms","start":"2024-07-31T22:39:21.780366Z","end":"2024-07-31T22:39:22.042746Z","steps":["trace[960132748] 'agreement among raft nodes before linearized reading'  (duration: 262.10805ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:39:22.270413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.812701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-207300-m03\" ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2024-07-31T22:39:22.271073Z","caller":"traceutil/trace.go:171","msg":"trace[821949633] range","detail":"{range_begin:/registry/minions/ha-207300-m03; range_end:; response_count:1; response_revision:1560; }","duration":"127.522906ms","start":"2024-07-31T22:39:22.143534Z","end":"2024-07-31T22:39:22.271057Z","steps":["trace[821949633] 'range keys from in-memory index tree'  (duration: 125.663196ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:40:18.687098Z","caller":"traceutil/trace.go:171","msg":"trace[158337590] linearizableReadLoop","detail":"{readStateIndex:2046; appliedIndex:2046; }","duration":"138.68146ms","start":"2024-07-31T22:40:18.548398Z","end":"2024-07-31T22:40:18.68708Z","steps":["trace[158337590] 'read index received'  (duration: 138.67556ms)","trace[158337590] 'applied index is now lower than readState.Index'  (duration: 4.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T22:40:18.687506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.013366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-07-31T22:40:18.687641Z","caller":"traceutil/trace.go:171","msg":"trace[1938642556] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:1793; }","duration":"158.178567ms","start":"2024-07-31T22:40:18.529451Z","end":"2024-07-31T22:40:18.68763Z","steps":["trace[1938642556] 'agreement among raft nodes before linearized reading'  (duration: 157.770365ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:40:18.707344Z","caller":"traceutil/trace.go:171","msg":"trace[1235897662] transaction","detail":"{read_only:false; response_revision:1796; number_of_response:1; }","duration":"135.114441ms","start":"2024-07-31T22:40:18.572215Z","end":"2024-07-31T22:40:18.70733Z","steps":["trace[1235897662] 'process raft request'  (duration: 134.87534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:40:18.724936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.601086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-07-31T22:40:18.724998Z","caller":"traceutil/trace.go:171","msg":"trace[1563975002] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:1796; }","duration":"161.733987ms","start":"2024-07-31T22:40:18.563253Z","end":"2024-07-31T22:40:18.724987Z","steps":["trace[1563975002] 'agreement among raft nodes before linearized reading'  (duration: 161.587986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:40:18.72584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.536142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-fc5497c4f\" ","response":"range_response_count:1 size:2013"}
	{"level":"info","ts":"2024-07-31T22:40:18.72606Z","caller":"traceutil/trace.go:171","msg":"trace[2030656381] range","detail":"{range_begin:/registry/replicasets/default/busybox-fc5497c4f; range_end:; response_count:1; response_revision:1796; }","duration":"153.775044ms","start":"2024-07-31T22:40:18.572275Z","end":"2024-07-31T22:40:18.72605Z","steps":["trace[2030656381] 'agreement among raft nodes before linearized reading'  (duration: 153.214641ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:41:16.222621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1054}
	{"level":"info","ts":"2024-07-31T22:41:16.287726Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1054,"took":"62.383434ms","hash":3199964446,"current-db-size-bytes":3727360,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-31T22:41:16.288252Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3199964446,"revision":1054,"compact-revision":-1}
	
	
	==> kernel <==
	 22:41:24 up 12 min,  0 users,  load average: 0.93, 0.59, 0.35
	Linux ha-207300 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1aa0807dc075] <==
	I0731 22:40:43.977451       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:40:53.977150       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:40:53.977200       1 main.go:299] handling current node
	I0731 22:40:53.977218       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:40:53.977225       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:40:53.977625       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:40:53.977756       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:41:03.982433       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:41:03.982534       1 main.go:299] handling current node
	I0731 22:41:03.982554       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:41:03.982562       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:41:03.982788       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:41:03.982954       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:41:13.982767       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:41:13.983181       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:41:13.983440       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:41:13.983620       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:41:13.984102       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:41:13.984147       1 main.go:299] handling current node
	I0731 22:41:23.977317       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:41:23.977450       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:41:23.978010       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:41:23.978028       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:41:23.978095       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:41:23.978162       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ca42a9c8944b] <==
	I0731 22:31:35.463062       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 22:34:48.020001       1 trace.go:236] Trace[1810281688]: "Update" accept:application/json, */*,audit-id:f04ae927-3be2-4e7c-afe1-08fd7b44ca4a,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (31-Jul-2024 22:34:47.505) (total time: 514ms):
	Trace[1810281688]: ["GuaranteedUpdate etcd3" audit-id:f04ae927-3be2-4e7c-afe1-08fd7b44ca4a,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 513ms (22:34:47.506)
	Trace[1810281688]:  ---"Txn call completed" 512ms (22:34:48.019)]
	Trace[1810281688]: [514.242276ms] [514.242276ms] END
	E0731 22:39:10.860603       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0731 22:39:10.860736       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0731 22:39:10.862498       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0731 22:39:10.862807       1 timeout.go:142] post-timeout activity - time-elapsed: 2.133912ms, PATCH "/api/v1/namespaces/default/events/ha-207300-m03.17e76d4aab696f8d" result: <nil>
	E0731 22:39:10.884110       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 23.350129ms, panicked: false, err: context canceled, panic-reason: <nil>
	E0731 22:40:24.863994       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54458: use of closed network connection
	E0731 22:40:26.474163       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54460: use of closed network connection
	E0731 22:40:27.097661       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54462: use of closed network connection
	E0731 22:40:27.666208       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54464: use of closed network connection
	E0731 22:40:28.195403       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54466: use of closed network connection
	E0731 22:40:28.754879       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54468: use of closed network connection
	E0731 22:40:29.313353       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54470: use of closed network connection
	E0731 22:40:29.858863       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54472: use of closed network connection
	E0731 22:40:30.411250       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54474: use of closed network connection
	E0731 22:40:31.401846       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54477: use of closed network connection
	E0731 22:40:41.946824       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54479: use of closed network connection
	E0731 22:40:42.481604       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54484: use of closed network connection
	E0731 22:40:53.012848       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54487: use of closed network connection
	E0731 22:40:53.510279       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54490: use of closed network connection
	E0731 22:41:04.056959       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54492: use of closed network connection
	
	
	==> kube-controller-manager [23266576b86c] <==
	I0731 22:31:58.651445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.4µs"
	I0731 22:31:59.736608       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0731 22:35:15.721239       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-207300-m02\" does not exist"
	I0731 22:35:15.740240       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-207300-m02" podCIDRs=["10.244.1.0/24"]
	I0731 22:35:19.779337       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-207300-m02"
	I0731 22:39:10.046799       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-207300-m03\" does not exist"
	I0731 22:39:10.077201       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-207300-m03" podCIDRs=["10.244.2.0/24"]
	I0731 22:39:14.885189       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-207300-m03"
	I0731 22:40:17.858337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="211.080858ms"
	I0731 22:40:17.899017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.631023ms"
	I0731 22:40:18.026266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.074597ms"
	I0731 22:40:18.309325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.922651ms"
	I0731 22:40:18.841373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="531.759416ms"
	E0731 22:40:18.841453       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 22:40:18.841534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0731 22:40:18.847364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.5µs"
	I0731 22:40:19.049343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.757032ms"
	I0731 22:40:19.049446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.1µs"
	I0731 22:40:19.966028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.001µs"
	I0731 22:40:21.084401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.418524ms"
	I0731 22:40:21.085052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.701µs"
	I0731 22:40:21.637402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.493145ms"
	I0731 22:40:21.637555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.5µs"
	I0731 22:40:21.853997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.443764ms"
	I0731 22:40:21.854157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.1µs"
	
	
	==> kube-proxy [76a17591c6fa] <==
	I0731 22:31:36.765718       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:31:36.788887       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.21.92"]
	I0731 22:31:36.871466       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:31:36.871591       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:31:36.871616       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:31:36.876289       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:31:36.877211       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:31:36.877243       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:31:36.878788       1 config.go:192] "Starting service config controller"
	I0731 22:31:36.878948       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:31:36.879378       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:31:36.879466       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:31:36.880475       1 config.go:319] "Starting node config controller"
	I0731 22:31:36.880510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:31:36.980656       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 22:31:36.980672       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:31:36.980711       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [72d884b0f883] <==
	W0731 22:31:19.437090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:31:19.437299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 22:31:19.490439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 22:31:19.490491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 22:31:19.511747       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:31:19.511989       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 22:31:19.572668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 22:31:19.573725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 22:31:19.704448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:31:19.704593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:31:19.721793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:31:19.721834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:31:19.802377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.802488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.807167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:31:19.807637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:31:19.863708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.864270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.899344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.899547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.909474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:31:19.909805       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:31:19.911512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:31:19.911557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0731 22:31:22.682143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 22:37:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:37:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:38:21 ha-207300 kubelet[2395]: E0731 22:38:21.704814    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:38:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:38:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:38:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:38:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:39:21 ha-207300 kubelet[2395]: E0731 22:39:21.712867    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:39:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:39:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:39:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:39:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:40:17 ha-207300 kubelet[2395]: I0731 22:40:17.857829    2395 topology_manager.go:215] "Topology Admit Handler" podUID="bb1f32dc-091e-4ad4-b2f6-139c4f779c78" podNamespace="default" podName="busybox-fc5497c4f-dmsjq"
	Jul 31 22:40:17 ha-207300 kubelet[2395]: I0731 22:40:17.977791    2395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mc2z\" (UniqueName: \"kubernetes.io/projected/bb1f32dc-091e-4ad4-b2f6-139c4f779c78-kube-api-access-4mc2z\") pod \"busybox-fc5497c4f-dmsjq\" (UID: \"bb1f32dc-091e-4ad4-b2f6-139c4f779c78\") " pod="default/busybox-fc5497c4f-dmsjq"
	Jul 31 22:40:21 ha-207300 kubelet[2395]: I0731 22:40:21.564752    2395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-dmsjq" podStartSLOduration=3.262624699 podStartE2EDuration="4.564702716s" podCreationTimestamp="2024-07-31 22:40:17 +0000 UTC" firstStartedPulling="2024-07-31 22:40:19.314049274 +0000 UTC m=+537.899494002" lastFinishedPulling="2024-07-31 22:40:20.616127291 +0000 UTC m=+539.201572019" observedRunningTime="2024-07-31 22:40:21.564405612 +0000 UTC m=+540.149850340" watchObservedRunningTime="2024-07-31 22:40:21.564702716 +0000 UTC m=+540.150147444"
	Jul 31 22:40:21 ha-207300 kubelet[2395]: E0731 22:40:21.725444    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:40:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:40:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:40:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:40:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:41:21 ha-207300 kubelet[2395]: E0731 22:41:21.715813    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:41:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:41:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:41:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:41:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:41:16.438426    8708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-207300 -n ha-207300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-207300 -n ha-207300: (12.4791892s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-207300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.22s)
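Note on the recurring stderr line above ("Unable to resolve the current Docker CLI context \"default\" ... meta.json: The system cannot find the path specified"): minikube probes the Docker CLI context even though this run uses the hyperv driver, and on this machine the context metadata file is simply absent, so the probe is logged at warning level and the test continues. The sketch below shows the kind of lookup that warning implies; it assumes the Docker CLI convention of storing context metadata under a sha256-of-name directory, and is not minikube's actual code.

	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Println("cannot determine home dir:", err)
			return
		}
		// Assumption: the Docker CLI keeps context metadata under
		// ~/.docker/contexts/meta/<sha256(context name)>/meta.json.
		digest := sha256.Sum256([]byte("default"))
		meta := filepath.Join(home, ".docker", "contexts", "meta",
			fmt.Sprintf("%x", digest), "meta.json")
		if _, statErr := os.Stat(meta); statErr != nil {
			// Matches the W-level log line: the file is missing, so the
			// context cannot be resolved and only a warning is emitted.
			fmt.Println("docker context metadata missing:", statErr)
		}
	}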

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (46.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list --output json: exit status 1 (7.035747s)

                                                
                                                
** stderr ** 
	W0731 22:58:13.348786    8068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
ha_test.go:392: failed to list profiles with json format. args "out/minikube-windows-amd64.exe profile list --output json": exit status 1
ha_test.go:398: failed to decode json from profile list: args "out/minikube-windows-amd64.exe profile list --output json": unexpected end of JSON input
ha_test.go:411: expected the json of 'profile list' to include "ha-207300" but got *""*. args: "out/minikube-windows-amd64.exe profile list --output json"
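The three assertions above describe a single failure mode: `profile list --output json` exited with status 1 and produced nothing on stdout, so JSON decoding stops immediately with "unexpected end of JSON input" and the expected "ha-207300" entry is never seen. A minimal sketch of that failure mode follows, assuming a simplified result shape; the struct and binary path are illustrative, not the actual ha_test.go types.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Illustrative shape only; the real `profile list --output json`
	// schema carries more fields than the two name lists sketched here.
	type profileList struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err) // exit status 1, as in the run above
		}
		var pl profileList
		if jerr := json.Unmarshal(out, &pl); jerr != nil {
			// With empty stdout this prints "unexpected end of JSON input",
			// the same decode error reported at ha_test.go:398.
			fmt.Println("decode error:", jerr)
		}
	}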
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-207300 -n ha-207300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-207300 -n ha-207300: (14.3624678s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 logs -n 25: (9.9314843s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:52 UTC | 31 Jul 24 22:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:52 UTC | 31 Jul 24 22:52 UTC |
	|         | ha-207300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:52 UTC | 31 Jul 24 22:52 UTC |
	|         | ha-207300:/home/docker/cp-test_ha-207300-m03_ha-207300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:53 UTC |
	|         | ha-207300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300 sudo cat                                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-207300-m03_ha-207300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:53 UTC |
	|         | ha-207300-m02:/home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:53 UTC |
	|         | ha-207300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300-m02 sudo cat                                                                                   | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:53 UTC | 31 Jul 24 22:54 UTC |
	|         | ha-207300-m04:/home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:54 UTC |
	|         | ha-207300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300-m04 sudo cat                                                                                   | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:54 UTC |
	|         | /home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-207300 cp testdata\cp-test.txt                                                                                         | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:54 UTC |
	|         | ha-207300-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:54 UTC |
	|         | ha-207300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:54 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:55 UTC |
	|         | ha-207300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:55 UTC | 31 Jul 24 22:55 UTC |
	|         | ha-207300:/home/docker/cp-test_ha-207300-m04_ha-207300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:55 UTC | 31 Jul 24 22:55 UTC |
	|         | ha-207300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300 sudo cat                                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:55 UTC | 31 Jul 24 22:55 UTC |
	|         | /home/docker/cp-test_ha-207300-m04_ha-207300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:55 UTC | 31 Jul 24 22:56 UTC |
	|         | ha-207300-m02:/home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:56 UTC |
	|         | ha-207300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300-m02 sudo cat                                                                                   | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:56 UTC |
	|         | /home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt                                                                       | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:56 UTC |
	|         | ha-207300-m03:/home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n                                                                                                          | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:56 UTC |
	|         | ha-207300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-207300 ssh -n ha-207300-m03 sudo cat                                                                                   | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:56 UTC |
	|         | /home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-207300 node stop m02 -v=7                                                                                              | ha-207300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 22:56 UTC | 31 Jul 24 22:57 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:28:20
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:28:20.394898    9488 out.go:291] Setting OutFile to fd 1524 ...
	I0731 22:28:20.395358    9488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:28:20.395436    9488 out.go:304] Setting ErrFile to fd 1528...
	I0731 22:28:20.395512    9488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:28:20.416409    9488 out.go:298] Setting JSON to false
	I0731 22:28:20.418993    9488 start.go:129] hostinfo: {"hostname":"minikube6","uptime":540842,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:28:20.418993    9488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:28:20.427763    9488 out.go:177] * [ha-207300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:28:20.438483    9488 notify.go:220] Checking for updates...
	I0731 22:28:20.438871    9488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:28:20.441706    9488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:28:20.444315    9488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:28:20.447232    9488 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:28:20.449251    9488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:28:20.453233    9488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:28:25.468913    9488 out.go:177] * Using the hyperv driver based on user configuration
	I0731 22:28:25.472010    9488 start.go:297] selected driver: hyperv
	I0731 22:28:25.472010    9488 start.go:901] validating driver "hyperv" against <nil>
	I0731 22:28:25.472010    9488 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:28:25.519913    9488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:28:25.520754    9488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:28:25.520754    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:28:25.520754    9488 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 22:28:25.520754    9488 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:28:25.521744    9488 start.go:340] cluster config:
	{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:28:25.521857    9488 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:28:25.527878    9488 out.go:177] * Starting "ha-207300" primary control-plane node in "ha-207300" cluster
	I0731 22:28:25.530614    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:28:25.530614    9488 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 22:28:25.530614    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:28:25.531324    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:28:25.531324    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:28:25.531985    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:28:25.531985    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json: {Name:mk44506ea483dff1f2f73c4d37ad7611d3f92c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:28:25.533281    9488 start.go:360] acquireMachinesLock for ha-207300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:28:25.533281    9488 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-207300"
	I0731 22:28:25.533281    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:28:25.533281    9488 start.go:125] createHost starting for "" (driver="hyperv")
	I0731 22:28:25.537219    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:28:25.538220    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:28:25.538220    9488 client.go:168] LocalClient.Create starting
	I0731 22:28:25.538220    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:28:25.538220    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:28:25.539217    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:28:27.453572    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:28:27.453572    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:27.454245    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:28:29.054801    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:28:29.054801    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:29.054889    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:28:30.482751    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:28:30.483674    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:30.483779    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:28:33.815521    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:28:33.816023    9488 main.go:141] libmachine: [stderr =====>] : 
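
The probe above looks for a usable Hyper-V switch: any external switch, or the built-in "Default Switch" identified by its well-known GUID. A standalone sketch of the same check (an assumption for illustration, not minikube's code), shelling out to the identical Get-VMSwitch one-liner and decoding its JSON:

// switchprobe.go - minimal sketch of the Hyper-V switch probe seen above.
// It runs the same PowerShell one-liner and reports the first usable switch.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type vmSwitch struct {
	Id         string `json:"Id"`
	Name       string `json:"Name"`
	SwitchType int    `json:"SwitchType"`
}

func main() {
	// Command copied verbatim from the log above.
	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | Sort-Object -Property SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		log.Fatalf("powershell failed: %v", err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		log.Fatalf("unexpected Get-VMSwitch output: %v", err)
	}
	if len(switches) == 0 {
		log.Fatal("no usable Hyper-V switch found")
	}
	fmt.Printf("using switch %q (type %d)\n", switches[0].Name, switches[0].SwitchType)
}

In this run the only match was the "Default Switch", which is why the log settles on it a few lines later.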
	I0731 22:28:33.818487    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:28:34.307608    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:28:34.597133    9488 main.go:141] libmachine: Creating VM...
	I0731 22:28:34.597133    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:28:37.245341    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:28:37.245837    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:37.245837    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:28:37.245997    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:28:38.900949    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:28:38.901177    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:38.901177    9488 main.go:141] libmachine: Creating VHD
	I0731 22:28:38.901177    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:28:42.503479    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 47418C26-BE3D-45A3-8E9D-DC120EE42026
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:28:42.504582    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:42.504582    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:28:42.504676    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:28:42.515468    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:28:45.544097    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:45.544625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:45.544625    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd' -SizeBytes 20000MB
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:47.960288    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:28:51.444071    9488 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-207300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:28:51.444260    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:51.444347    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300 -DynamicMemoryEnabled $false
	I0731 22:28:53.544413    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:53.545444    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:53.545488    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300 -Count 2
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:55.628110    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\boot2docker.iso'
	I0731 22:28:58.068995    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:28:58.068995    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:28:58.069065    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\disk.vhd'
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:00.612970    9488 main.go:141] libmachine: Starting VM...
	I0731 22:29:00.612970    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300
	I0731 22:29:03.633050    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:03.633050    9488 main.go:141] libmachine: [stderr =====>] : 
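
The sequence above creates the machine in a fixed order: a tiny 10 MB fixed VHD is created and seeded with the SSH key (the "magic tar header" lines), converted to a dynamic VHD, resized to 20000 MB, and then the VM is defined, given static memory, two CPUs, the boot2docker ISO and the data disk, and started. Condensed into a Go sketch that shells out to the same PowerShell commands (paths copied from this run; the tar-seeding step is only noted as a comment; this is an illustrative sketch, not the driver's code):

// createvm.go - condensed sketch of the Hyper-V VM creation sequence logged above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// ps runs a PowerShell command the same way the log shows libmachine doing it.
func ps(cmd string) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%s\n%s", cmd, out)
	}
}

func main() {
	// Machine directory taken from the log; adjust for your environment.
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300`
	ps(fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir))
	// (Here the driver writes a small tar containing the SSH key into fixed.vhd.)
	ps(fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir))
	ps(fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir))
	ps(fmt.Sprintf(`Hyper-V\New-VM ha-207300 -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, dir))
	ps(`Hyper-V\Set-VMMemory -VMName ha-207300 -DynamicMemoryEnabled $false`)
	ps(`Hyper-V\Set-VMProcessor ha-207300 -Count 2`)
	ps(fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName ha-207300 -Path '%s\boot2docker.iso'`, dir))
	ps(fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName ha-207300 -Path '%s\disk.vhd'`, dir))
	ps(`Hyper-V\Start-VM ha-207300`)
}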
	I0731 22:29:03.633877    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:29:03.633877    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:05.962295    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:05.963066    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:05.963066    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:08.496443    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:08.496443    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:09.503400    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:11.776457    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:11.776625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:11.776735    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:14.338679    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:14.338679    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:15.348687    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:17.596512    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:17.596570    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:17.596570    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:20.102842    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:20.102842    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:21.108566    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:23.270573    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:23.270573    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:23.271015    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:25.726644    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:29:25.726644    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:26.730075    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:28.980714    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:28.980714    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:28.981474    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:31.465023    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:31.465023    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:31.465613    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:33.520265    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:33.520265    9488 main.go:141] libmachine: [stderr =====>] : 
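
The "Waiting for host to start..." loop above alternates between querying the VM state and asking for the first IP address of its first network adapter until Hyper-V reports one (172.17.21.92 after roughly 28 seconds here). A standalone sketch of that polling loop (assumed, not the libmachine code):

// waitforip.go - sketch of the state/IP polling loop seen in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// ps runs one PowerShell command and returns its trimmed stdout.
func ps(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-207300" // VM name taken from the log; adjust for your cluster.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
		if err == nil && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for the VM to report an IP address")
}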
	I0731 22:29:33.520371    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:29:33.520538    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:35.613964    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:35.614256    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:35.614256    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:38.000547    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:38.001517    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:38.006578    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:38.017254    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:38.017254    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:29:38.156576    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:29:38.156665    9488 buildroot.go:166] provisioning hostname "ha-207300"
	I0731 22:29:38.156665    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:40.203975    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:40.205205    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:40.205333    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:42.586902    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:42.587461    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:42.592840    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:42.593030    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:42.593030    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300 && echo "ha-207300" | sudo tee /etc/hostname
	I0731 22:29:42.742946    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300
	
	I0731 22:29:42.742946    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:44.758498    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:44.758770    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:44.758834    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:47.174994    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:47.174994    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:47.180447    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:29:47.181077    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:29:47.181077    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:29:47.332698    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:29:47.332698    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:29:47.332698    9488 buildroot.go:174] setting up certificates
	I0731 22:29:47.332698    9488 provision.go:84] configureAuth start
	I0731 22:29:47.332698    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:49.353848    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:49.354728    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:49.354728    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:51.792088    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:51.793009    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:51.793009    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:53.852024    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:53.852424    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:53.852424    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:29:56.274044    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:29:56.274044    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:56.274044    9488 provision.go:143] copyHostCerts
	I0731 22:29:56.274044    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:29:56.274044    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:29:56.274044    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:29:56.274641    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:29:56.275860    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:29:56.275860    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:29:56.275860    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:29:56.276639    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:29:56.277764    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:29:56.278065    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:29:56.278158    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:29:56.278531    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:29:56.279227    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300 san=[127.0.0.1 172.17.21.92 ha-207300 localhost minikube]
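
The provision.go:117 line above generates a Docker TLS server certificate signed by the minikube CA and carrying the listed SANs. A minimal sketch of what that entails with the Go standard library (assuming an RSA, PKCS#1-encoded CA key; the ca.pem/ca-key.pem file names stand in for the .minikube\certs paths, and the private-key handling is elided):

// servercert.go - sketch of signing a server certificate with the listed SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// readPEM loads the first PEM block from a file and returns its DER bytes.
func readPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("%s: no PEM block found", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(readPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(readPEM("ca-key.pem")) // assumes an RSA CA key
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-207300"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:117 line above.
		DNSNames:    []string{"ha-207300", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.21.92")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}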
	I0731 22:29:56.550278    9488 provision.go:177] copyRemoteCerts
	I0731 22:29:56.561199    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:29:56.561199    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:29:58.625652    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:29:58.625983    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:29:58.625983    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:01.078020    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:01.079180    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:01.079716    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:01.182473    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6212155s)
	I0731 22:30:01.182473    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:30:01.183091    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:30:01.226479    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:30:01.226479    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:30:01.271843    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:30:01.272343    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:30:01.322908    9488 provision.go:87] duration metric: took 13.9900327s to configureAuth
	I0731 22:30:01.322908    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:30:01.323787    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:30:01.323787    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:03.463055    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:05.980716    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:05.980838    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:05.986686    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:05.987229    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:05.987229    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:30:06.116413    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:30:06.116413    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:30:06.116413    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:30:06.116413    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:08.251673    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:08.252344    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:08.252475    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:10.717528    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:10.718261    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:10.722884    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:10.723757    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:10.723757    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:30:10.878778    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:30:10.878920    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:12.958022    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:12.958022    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:12.958249    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:15.639510    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:15.639510    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:15.644748    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:15.645592    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:15.645592    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:30:17.800096    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:30:17.800096    9488 machine.go:97] duration metric: took 44.2791645s to provisionDockerMachine
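
The docker.service install above uses an idempotent pattern: the rendered unit is written to docker.service.new, and only when it differs from the live unit (here the unit did not exist yet, hence the diff error) is it moved into place, followed by daemon-reload, enable, and restart. A local Go sketch of the same pattern (assumed; in the log it runs as the shell one-liner over SSH and requires root):

// unitswap.go - sketch of the "install unit only if changed" pattern above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// run executes a command and aborts on failure, mirroring the one-liner's && chain.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newUnit := unit + ".new"

	want, err := os.ReadFile(newUnit)
	if err != nil {
		log.Fatal(err)
	}
	have, _ := os.ReadFile(unit) // a missing file (as in this run) simply counts as "different"
	if bytes.Equal(have, want) {
		return // nothing changed; leave the running daemon alone
	}
	if err := os.Rename(newUnit, unit); err != nil {
		log.Fatal(err)
	}
	run("systemctl", "daemon-reload")
	run("systemctl", "enable", "docker")
	run("systemctl", "restart", "docker")
}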
	I0731 22:30:17.800096    9488 client.go:171] duration metric: took 1m52.2604589s to LocalClient.Create
	I0731 22:30:17.800096    9488 start.go:167] duration metric: took 1m52.2604589s to libmachine.API.Create "ha-207300"
	I0731 22:30:17.800096    9488 start.go:293] postStartSetup for "ha-207300" (driver="hyperv")
	I0731 22:30:17.800096    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:30:17.812161    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:30:17.812161    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:19.871610    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:19.871842    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:19.871842    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:22.279632    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:22.280702    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:22.280998    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:22.388137    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5758811s)
	I0731 22:30:22.400208    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:30:22.407127    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:30:22.407238    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:30:22.407788    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:30:22.408772    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:30:22.408841    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:30:22.419466    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:30:22.435985    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:30:22.479671    9488 start.go:296] duration metric: took 4.679515s for postStartSetup
	I0731 22:30:22.482502    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:24.480653    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:24.480954    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:24.481028    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:26.865667    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:26.865667    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:26.866710    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:30:26.869702    9488 start.go:128] duration metric: took 2m1.3348894s to createHost
	I0731 22:30:26.869784    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:28.872242    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:28.872242    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:28.872675    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:31.347884    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:31.348665    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:31.353944    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:31.354521    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:31.354713    9488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:30:31.494046    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465031.514556384
	
	I0731 22:30:31.494046    9488 fix.go:216] guest clock: 1722465031.514556384
	I0731 22:30:31.494046    9488 fix.go:229] Guest: 2024-07-31 22:30:31.514556384 +0000 UTC Remote: 2024-07-31 22:30:26.8697028 +0000 UTC m=+126.624504601 (delta=4.644853584s)
	I0731 22:30:31.494046    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:33.532157    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:33.532157    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:33.532529    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:35.904058    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:35.904058    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:35.910721    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:30:35.911109    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.21.92 22 <nil> <nil>}
	I0731 22:30:35.911109    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465031
	I0731 22:30:36.061294    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:30:31 UTC 2024
	
	I0731 22:30:36.061294    9488 fix.go:236] clock set: Wed Jul 31 22:30:31 UTC 2024
	 (err=<nil>)
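
The fix.go lines above compare the guest clock against the host and, because the delta (about 4.6s) exceeded the tolerance, pushed the host time into the guest with sudo date -s. A rough sketch of that check using the system ssh client (key path and user taken from this run; the 2-second threshold is a hypothetical stand-in for minikube's actual tolerance):

// clocksync.go - sketch of the guest/host clock-skew check and reset shown above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	key := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa`
	host := "docker@172.17.21.92"

	// Read the guest clock as a Unix timestamp over SSH.
	out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host, "date +%s").Output()
	if err != nil {
		log.Fatal(err)
	}
	guest, err := strconv.ParseInt(strings.TrimSpace(string(out)), 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Since(time.Unix(guest, 0))
	fmt.Println("guest clock delta:", delta)

	if delta > 2*time.Second || delta < -2*time.Second {
		// Reset the guest clock to the host's current time, as in the log.
		cmd := fmt.Sprintf("sudo date -s @%d", time.Now().Unix())
		if err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host, cmd).Run(); err != nil {
			log.Fatal(err)
		}
	}
}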
	I0731 22:30:36.061294    9488 start.go:83] releasing machines lock for "ha-207300", held for 2m10.5263644s
	I0731 22:30:36.061294    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:38.053349    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:38.053349    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:38.054295    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:40.423306    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:40.423438    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:40.427579    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:30:40.427758    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:40.438024    9488 ssh_runner.go:195] Run: cat /version.json
	I0731 22:30:40.438167    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:30:42.592019    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:42.592019    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:42.592669    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:42.592881    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:30:42.592881    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:42.592997    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:30:45.188078    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:45.188391    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:45.188783    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:45.209111    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:30:45.209170    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:30:45.209531    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:30:45.283141    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8554199s)
	W0731 22:30:45.283280    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:30:45.300329    9488 ssh_runner.go:235] Completed: cat /version.json: (4.8622061s)
	I0731 22:30:45.313952    9488 ssh_runner.go:195] Run: systemctl --version
	I0731 22:30:45.333719    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:30:45.341629    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:30:45.353119    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:30:45.382683    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:30:45.382762    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:30:45.382839    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 22:30:45.393208    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:30:45.393208    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 22:30:45.428666    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 22:30:45.457117    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:30:45.474944    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:30:45.485936    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:30:45.515754    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:30:45.546839    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:30:45.575005    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:30:45.604385    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:30:45.633769    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:30:45.664139    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:30:45.692973    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:30:45.721365    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:30:45.752204    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:30:45.784307    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:45.966448    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 22:30:46.000194    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:30:46.012584    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:30:46.052332    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:30:46.091711    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:30:46.136343    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:30:46.170522    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:30:46.203683    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:30:46.260753    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:30:46.280455    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:30:46.321218    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:30:46.338019    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:30:46.353721    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:30:46.399106    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:30:46.572355    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:30:46.728599    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:30:46.728697    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:30:46.772433    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:46.974151    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:30:49.509568    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5353244s)
	I0731 22:30:49.521135    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:30:49.553097    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:30:49.585108    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:30:49.775682    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:30:49.944903    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:50.135217    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:30:50.173132    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:30:50.204743    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:30:50.424599    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:30:50.523208    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:30:50.533978    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:30:50.543018    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:30:50.554619    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:30:50.571122    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:30:50.622569    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 22:30:50.632398    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:30:50.672114    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:30:50.701078    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:30:50.701078    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:30:50.706085    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:30:50.709070    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:30:50.709070    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:30:50.719070    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:30:50.725463    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
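
The ip.go lines above choose the address advertised inside the VM as host.minikube.internal by scanning the host's interfaces for the one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.17.16.1 in this run). A minimal standalone sketch of that lookup (assumed, not minikube's code):

// hostip.go - sketch of the host-side interface lookup used for host.minikube.internal.
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)" // prefix searched for in the log above
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			log.Fatal(err)
		}
		for _, addr := range addrs {
			// Skip the fe80:: link-local address and keep the first IPv4 one.
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Println("host.minikube.internal ->", ipnet.IP) // e.g. 172.17.16.1 in this run
				return
			}
		}
	}
	log.Fatalf("no interface matching %q", prefix)
}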
	I0731 22:30:50.757660    9488 kubeadm.go:883] updating cluster {Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:30:50.757660    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:30:50.767059    9488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 22:30:50.795754    9488 docker.go:685] Got preloaded images: 
	I0731 22:30:50.795754    9488 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0731 22:30:50.808750    9488 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 22:30:50.833087    9488 ssh_runner.go:195] Run: which lz4
	I0731 22:30:50.838550    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 22:30:50.849237    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 22:30:50.855487    9488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 22:30:50.855487    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0731 22:30:52.347539    9488 docker.go:649] duration metric: took 1.5089697s to copy over tarball
	I0731 22:30:52.359557    9488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 22:31:01.117509    9488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7577834s)
	I0731 22:31:01.117509    9488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 22:31:01.179265    9488 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 22:31:01.196219    9488 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0731 22:31:01.236877    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:31:01.426306    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:31:04.740562    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3141708s)
	I0731 22:31:04.749824    9488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 22:31:04.783559    9488 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 22:31:04.783624    9488 cache_images.go:84] Images are preloaded, skipping loading
	I0731 22:31:04.783707    9488 kubeadm.go:934] updating node { 172.17.21.92 8443 v1.30.3 docker true true} ...
	I0731 22:31:04.783974    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.21.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:31:04.793268    9488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 22:31:04.858907    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:31:04.858907    9488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:31:04.858907    9488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:31:04.858907    9488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.21.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-207300 NodeName:ha-207300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.21.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.21.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:31:04.859497    9488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.21.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-207300"
	  kubeletExtraArgs:
	    node-ip: 172.17.21.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.21.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
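The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what minikube later copies to /var/tmp/minikube/kubeadm.yaml.new and promotes to kubeadm.yaml before running kubeadm init. A rough way to sanity-check such a file offline is the config validate subcommand available in recent kubeadm releases; this is only a sketch, using the binary path and paths that appear later in this log:

  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml   # reports schema/field problems without touching the cluster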
	
	I0731 22:31:04.859606    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:31:04.872143    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:31:04.900301    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:31:04.900469    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
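The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod that announces the HA apiserver VIP 172.17.31.254 and, with lb_enable set, load-balances port 8443 across control-plane nodes. Once the node is Ready, the effect can be spot-checked roughly like this (a sketch; the lease name and namespace are taken from the env vars in the manifest above):

  kubectl -n kube-system get pods -o wide | grep kube-vip
  kubectl -n kube-system get lease plndr-cp-lock -o yaml    # leader election guarding the VIP
  ip addr show eth0 | grep 172.17.31.254                    # run on the node currently holding the VIP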
	I0731 22:31:04.912499    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:31:04.931663    9488 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:31:04.943927    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:31:04.961730    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0731 22:31:04.988322    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:31:05.015148    9488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0731 22:31:05.041139    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0731 22:31:05.081113    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:31:05.087227    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:31:05.114969    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:31:05.303405    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:31:05.335090    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.21.92
	I0731 22:31:05.335140    9488 certs.go:194] generating shared ca certs ...
	I0731 22:31:05.335140    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.335942    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:31:05.336494    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:31:05.336686    9488 certs.go:256] generating profile certs ...
	I0731 22:31:05.337485    9488 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:31:05.337553    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt with IP's: []
	I0731 22:31:05.848622    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt ...
	I0731 22:31:05.848622    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.crt: {Name:mk18891580ce23bacd68b0f7aef728a6870066fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.850535    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key ...
	I0731 22:31:05.850535    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key: {Name:mk75223d6c3518ef73cc5bc219634d912f36568b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:05.851544    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79
	I0731 22:31:05.851544    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.31.254]
	I0731 22:31:06.204999    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 ...
	I0731 22:31:06.204999    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79: {Name:mk8e736c4551099b4a6b3f35f2ed10d6cbb51124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.206035    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79 ...
	I0731 22:31:06.206035    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79: {Name:mkd4b5bba321ebbd39df52b27c1c33413da1a8c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.207028    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.bdfdcf79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:31:06.220043    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.bdfdcf79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:31:06.221020    9488 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
	I0731 22:31:06.221697    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt with IP's: []
	I0731 22:31:06.440827    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt ...
	I0731 22:31:06.440827    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt: {Name:mkf923965ffb62fe1b9ad3347bfbede9812f45a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.441828    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key ...
	I0731 22:31:06.441828    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key: {Name:mk02bb7795dea1704912df99fedf92def1afb132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:06.443500    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:31:06.444010    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:31:06.444118    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:31:06.444703    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:31:06.452699    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:31:06.453710    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:31:06.453710    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:31:06.454720    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:06.455781    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:31:06.456700    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:31:06.494569    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:31:06.535968    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:31:06.575958    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:31:06.614203    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 22:31:06.659166    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:31:06.699085    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:31:06.745991    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:31:06.788748    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:31:06.831627    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:31:06.876359    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:31:06.916726    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:31:06.959329    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:31:06.979683    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:31:07.012283    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.019829    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.031106    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:31:07.060520    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:31:07.089991    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:31:07.119935    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.127930    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.139217    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:31:07.157369    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:31:07.185396    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:31:07.212372    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.218739    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.228839    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:31:07.248953    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
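The three blocks above follow the standard OpenSSL CA-directory convention: each certificate is placed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash, which is what the openssl x509 -hash -noout invocations compute. A link can be matched back to its certificate along these lines (a sketch, using the minikubeCA file named in the log; the variable name is only illustrative):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  ls -l "/etc/ssl/certs/${h}.0"    # expected to resolve back to minikubeCA.pem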
	I0731 22:31:07.278097    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:31:07.284810    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:31:07.285105    9488 kubeadm.go:392] StartCluster: {Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:31:07.292998    9488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 22:31:07.330731    9488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:31:07.363738    9488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:31:07.391733    9488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:31:07.408158    9488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:31:07.408158    9488 kubeadm.go:157] found existing configuration files:
	
	I0731 22:31:07.423277    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:31:07.439898    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:31:07.449945    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:31:07.478872    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:31:07.494349    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:31:07.510175    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:31:07.539391    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:31:07.558062    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:31:07.570065    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:31:07.597086    9488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:31:07.612594    9488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:31:07.623493    9488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:31:07.639148    9488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 22:31:08.058482    9488 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 22:31:22.132529    9488 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:31:22.132529    9488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:31:22.133509    9488 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 22:31:22.133509    9488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:31:22.137503    9488 out.go:204]   - Generating certificates and keys ...
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:31:22.137503    9488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-207300 localhost] and IPs [172.17.21.92 127.0.0.1 ::1]
	I0731 22:31:22.138520    9488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-207300 localhost] and IPs [172.17.21.92 127.0.0.1 ::1]
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:31:22.139533    9488 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:31:22.140545    9488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:31:22.140545    9488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:31:22.146503    9488 out.go:204]   - Booting up control plane ...
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:31:22.146503    9488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:31:22.147540    9488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:31:22.147540    9488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003493206s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [api-check] The API server is healthy after 7.002058178s
	I0731 22:31:22.148502    9488 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:31:22.148502    9488 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:31:22.149510    9488 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:31:22.149510    9488 kubeadm.go:310] [mark-control-plane] Marking the node ha-207300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:31:22.149510    9488 kubeadm.go:310] [bootstrap-token] Using token: 3zaf11.kkfeag4mao0twvx0
	I0731 22:31:22.152500    9488 out.go:204]   - Configuring RBAC rules ...
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:31:22.153501    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:31:22.154513    9488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:31:22.154513    9488 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:31:22.154513    9488 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:31:22.154513    9488 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:31:22.154513    9488 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:31:22.154513    9488 kubeadm.go:310] 
	I0731 22:31:22.154513    9488 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:31:22.154513    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:31:22.155512    9488 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:31:22.155512    9488 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.155512    9488 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:31:22.155512    9488 kubeadm.go:310] 
	I0731 22:31:22.156509    9488 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:31:22.156509    9488 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:31:22.156509    9488 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:31:22.156509    9488 kubeadm.go:310] 
	I0731 22:31:22.156509    9488 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:31:22.156509    9488 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:31:22.156509    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3zaf11.kkfeag4mao0twvx0 \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--control-plane 
	I0731 22:31:22.157506    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:31:22.157506    9488 kubeadm.go:310] 
	I0731 22:31:22.157506    9488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3zaf11.kkfeag4mao0twvx0 \
	I0731 22:31:22.157506    9488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
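The discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, so it can be recomputed on the control-plane node to verify a join command before use. A sketch of the standard recipe, adjusted for minikube keeping its CA at /var/lib/minikube/certs/ca.crt rather than the stock /etc/kubernetes/pki path:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'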
	I0731 22:31:22.158513    9488 cni.go:84] Creating CNI manager for ""
	I0731 22:31:22.158513    9488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:31:22.162503    9488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:31:22.177505    9488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:31:22.186174    9488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:31:22.186174    9488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 22:31:22.233642    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:31:22.869001    9488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:31:22.882532    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:22.885157    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300 minikube.k8s.io/updated_at=2024_07_31T22_31_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=true
	I0731 22:31:22.909123    9488 ops.go:34] apiserver oom_adj: -16
	I0731 22:31:23.093292    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:23.601389    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:24.108247    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:24.595341    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:25.096733    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:25.598273    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:26.097963    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:26.599824    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:27.102146    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:27.609592    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:28.109528    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:28.596168    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:29.099763    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:29.601088    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:30.103011    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:30.604590    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:31.109646    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:31.595619    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:32.099737    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:32.600421    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:33.109257    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:33.608656    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:34.099608    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:34.604087    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.098013    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.604303    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:31:35.716632    9488 kubeadm.go:1113] duration metric: took 12.8474108s to wait for elevateKubeSystemPrivileges
	I0731 22:31:35.716632    9488 kubeadm.go:394] duration metric: took 28.4311651s to StartCluster
	I0731 22:31:35.716632    9488 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:35.716632    9488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:31:35.718737    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:31:35.719594    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:31:35.719594    9488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:31:35.719594    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:31:35.719594    9488 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 22:31:35.719594    9488 addons.go:69] Setting storage-provisioner=true in profile "ha-207300"
	I0731 22:31:35.719594    9488 addons.go:234] Setting addon storage-provisioner=true in "ha-207300"
	I0731 22:31:35.720580    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:31:35.720580    9488 addons.go:69] Setting default-storageclass=true in profile "ha-207300"
	I0731 22:31:35.720580    9488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-207300"
	I0731 22:31:35.720580    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:31:35.720580    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:35.721593    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:35.917386    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 22:31:36.279539    9488 start.go:971] {"host.minikube.internal": 172.17.16.1} host record injected into CoreDNS's ConfigMap
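The sed pipeline above patches the live coredns ConfigMap so that host.minikube.internal resolves to the Windows host at 172.17.16.1. Whether the record took effect can be checked from inside the cluster roughly as follows (a sketch; the pod name and busybox image are arbitrary examples, not part of this run):

  kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal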
	I0731 22:31:37.993009    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:37.993219    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:37.993298    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:37.993298    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:37.994649    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:31:37.995691    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 22:31:37.996130    9488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:31:37.997094    9488 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 22:31:37.998092    9488 addons.go:234] Setting addon default-storageclass=true in "ha-207300"
	I0731 22:31:37.998092    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:31:37.999124    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:37.999124    9488 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:31:37.999124    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:31:37.999124    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:40.261168    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:40.261836    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:40.261836    9488 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:31:40.261836    9488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:31:40.261836    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:31:40.397591    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:40.397934    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:40.398146    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:31:42.490146    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:31:42.490146    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:42.490268    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:31:43.067095    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:31:43.068098    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:43.068098    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:31:43.206158    9488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:31:45.012766    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:31:45.012766    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:45.014005    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:31:45.149489    9488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:31:45.284245    9488 round_trippers.go:463] GET https://172.17.31.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 22:31:45.285255    9488 round_trippers.go:469] Request Headers:
	I0731 22:31:45.285255    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:31:45.285255    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:31:45.297313    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:31:45.298884    9488 round_trippers.go:463] PUT https://172.17.31.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 22:31:45.298929    9488 round_trippers.go:469] Request Headers:
	I0731 22:31:45.298929    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:31:45.298929    9488 round_trippers.go:473]     Content-Type: application/json
	I0731 22:31:45.298929    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:31:45.302936    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:31:45.306766    9488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 22:31:45.311289    9488 addons.go:510] duration metric: took 9.5915727s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 22:31:45.311289    9488 start.go:246] waiting for cluster config update ...
	I0731 22:31:45.311289    9488 start.go:255] writing updated cluster config ...
	I0731 22:31:45.314730    9488 out.go:177] 
	I0731 22:31:45.325998    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:31:45.325998    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:31:45.331016    9488 out.go:177] * Starting "ha-207300-m02" control-plane node in "ha-207300" cluster
	I0731 22:31:45.335989    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:31:45.335989    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:31:45.335989    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:31:45.337002    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:31:45.337002    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:31:45.338996    9488 start.go:360] acquireMachinesLock for ha-207300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:31:45.338996    9488 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-207300-m02"
	I0731 22:31:45.338996    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:31:45.338996    9488 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0731 22:31:45.343991    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:31:45.343991    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:31:45.343991    9488 client.go:168] LocalClient.Create starting
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:31:45.343991    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:31:45.344989    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:47.190095    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:31:48.857854    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:31:48.857854    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:48.858025    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:31:50.374130    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:31:50.374130    9488 main.go:141] libmachine: [stderr =====>] : 
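(The three probes just above are the driver's Hyper-V preflight: is the Hyper-V PowerShell module available, is the current user in Hyper-V Administrators, and is the user a local Administrator. A minimal standalone sketch of the same checks for manual troubleshooting, not minikube's own code path:)

    # Hyper-V module installed?
    @(Get-Module -ListAvailable Hyper-V).Name | Get-Unique
    $identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = [Security.Principal.WindowsPrincipal]$identity
    # S-1-5-32-578 is the well-known SID of the built-in "Hyper-V Administrators" group.
    $principal.IsInRole([Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))
    # Fallback check used above: plain local Administrator membership.
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)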
	I0731 22:31:50.374774    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:31:53.905340    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:31:53.905340    9488 main.go:141] libmachine: [stderr =====>] : 
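(The switch query above is how the driver picks a network for the new node: any External vswitch, or the built-in Default Switch identified by its fixed GUID. On this host only the Default Switch is present, reported as SwitchType 1, i.e. Internal. A hedged standalone repro of the same filter:)

    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    $defaultSwitchId = 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'   # fixed GUID of the Hyper-V "Default Switch"
    $candidates = Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq $defaultSwitchId) } |
        Sort-Object -Property SwitchType
    ConvertTo-Json @($candidates)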
	I0731 22:31:53.908508    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:31:54.383288    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:31:54.678107    9488 main.go:141] libmachine: Creating VM...
	I0731 22:31:54.678107    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:31:57.605842    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:31:57.606730    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:57.606849    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:31:57.606957    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:31:59.368070    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:31:59.368070    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:31:59.368070    9488 main.go:141] libmachine: Creating VHD
	I0731 22:31:59.368239    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:32:03.099372    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9C3107D0-701E-45FE-9F08-1F10E51140A7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:32:03.099473    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:03.099530    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:32:03.099583    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:32:03.110082    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:06.305415    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd' -SizeBytes 20000MB
	I0731 22:32:08.866083    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:08.866083    9488 main.go:141] libmachine: [stderr =====>] : 
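(Disk creation for the node happens in three Hyper-V calls logged above: a tiny 10MB fixed VHD is created, the Go driver then appends a tar payload carrying the generated SSH key, per the "Writing magic tar header" / "Writing SSH key tar header" lines, and the file is converted to a dynamic VHD and resized to the requested 20000MB. A sketch of just the host-side PowerShell, with the in-between tar write left as a comment; paths are copied from this log and not meant for reuse:)

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...driver appends a tar payload containing the freshly generated SSH key to fixed.vhd here...
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB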
	I0731 22:32:08.866469    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:32:12.519217    9488 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-207300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:32:12.519217    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:12.519307    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300-m02 -DynamicMemoryEnabled $false
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:14.724344    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300-m02 -Count 2
	I0731 22:32:16.866836    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:16.866836    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:16.867008    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\boot2docker.iso'
	I0731 22:32:19.374171    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:19.375043    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:19.375116    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\disk.vhd'
	I0731 22:32:22.054654    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:22.054973    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:22.054973    9488 main.go:141] libmachine: Starting VM...
	I0731 22:32:22.054973    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300-m02
	I0731 22:32:25.235013    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:25.235013    9488 main.go:141] libmachine: [stderr =====>] : 
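(The VM itself is assembled from the individual cmdlets logged above: create it on the chosen switch, pin memory and vCPU count, attach the boot2docker ISO as the DVD drive and the freshly built disk.vhd, then start it. Condensed here as a sketch, not the driver's actual code path:)

    $vm  = 'ha-207300-m02'
    $dir = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$vm"
    Hyper-V\New-VM $vm -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false   # fixed 2200MB, no dynamic memory
    Hyper-V\Set-VMProcessor $vm -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $vm -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $vm -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $vm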
	I0731 22:32:25.235013    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:32:25.235216    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:27.555350    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:27.555862    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:27.555912    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:30.064982    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:30.064982    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:31.067429    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:33.262094    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:35.785999    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:35.787012    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:36.793716    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:38.977650    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:38.978033    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:38.978136    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:41.536945    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:41.536945    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:42.541192    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:44.731361    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:44.731361    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:44.731477    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:47.277122    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:32:47.277122    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:48.293756    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:50.584563    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:50.585590    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:50.585642    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:53.122743    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:32:53.123002    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:53.123002    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:55.194673    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:55.194673    9488 main.go:141] libmachine: [stderr =====>] : 
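(After Start-VM the driver simply polls the VM state and the first adapter's IP list; the empty stdout results above are polls made before the Default Switch handed out a DHCP lease, which eventually showed up as 172.17.28.136. A rough equivalent of that wait loop; the sleep interval and the IPv4-only filter are assumptions, not minikube's exact values:)

    $vm = 'ha-207300-m02'
    do {
        Start-Sleep -Seconds 5
        $state = (Hyper-V\Get-VM $vm).State
        # Take the first IPv4 address the synthetic adapter reports, if any yet.
        $ip = ((Hyper-V\Get-VM $vm).NetworkAdapters[0]).IPAddresses |
            Where-Object { $_ -match '^(\d{1,3}\.){3}\d{1,3}$' } |
            Select-Object -First 1
    } while ($state -eq 'Running' -and -not $ip)
    "{0}: {1} at {2}" -f $vm, $state, $ip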
	I0731 22:32:55.195623    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:32:55.195764    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:32:57.312221    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:32:57.312396    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:57.312616    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:32:59.797824    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:32:59.798840    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:32:59.803996    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:32:59.814903    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:32:59.814903    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:32:59.942650    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:32:59.942713    9488 buildroot.go:166] provisioning hostname "ha-207300-m02"
	I0731 22:32:59.942823    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:02.069447    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:02.069447    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:02.069708    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:04.609475    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:04.609915    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:04.617633    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:04.618472    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:04.618636    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300-m02 && echo "ha-207300-m02" | sudo tee /etc/hostname
	I0731 22:33:04.766635    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300-m02
	
	I0731 22:33:04.766635    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:06.857594    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:06.858601    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:06.858601    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:09.399613    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:09.400657    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:09.405618    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:09.406585    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:09.406652    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:33:09.560537    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:33:09.560537    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:33:09.560537    9488 buildroot.go:174] setting up certificates
	I0731 22:33:09.560537    9488 provision.go:84] configureAuth start
	I0731 22:33:09.560537    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:11.645999    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:14.150076    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:16.228619    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:16.228779    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:16.228779    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:18.724950    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:18.724950    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:18.724950    9488 provision.go:143] copyHostCerts
	I0731 22:33:18.725831    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:33:18.725955    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:33:18.725955    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:33:18.726519    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:33:18.727784    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:33:18.727899    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:33:18.727899    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:33:18.728426    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:33:18.729715    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:33:18.730163    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:33:18.730216    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:33:18.730216    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:33:18.731714    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300-m02 san=[127.0.0.1 172.17.28.136 ha-207300-m02 localhost minikube]
	I0731 22:33:18.857172    9488 provision.go:177] copyRemoteCerts
	I0731 22:33:18.872231    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:33:18.872231    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:21.020077    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:21.020548    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:21.020636    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:23.525875    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:23.525875    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:23.525875    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:33:23.636615    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7642607s)
	I0731 22:33:23.636615    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:33:23.637143    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:33:23.683719    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:33:23.683891    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:33:23.727342    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:33:23.727342    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:33:23.775261    9488 provision.go:87] duration metric: took 14.214542s to configureAuth
	I0731 22:33:23.775391    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:33:23.776156    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:33:23.776288    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:25.872775    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:25.872775    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:25.873049    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:28.371704    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:28.371704    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:28.377885    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:28.378417    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:28.378417    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:33:28.513292    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:33:28.513292    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:33:28.513507    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:33:28.513644    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:30.605466    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:30.606279    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:30.606279    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:33.073433    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:33.073433    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:33.079145    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:33.079752    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:33.079956    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.21.92"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:33:33.233205    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.21.92
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:33:33.233205    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:35.340844    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:35.341024    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:35.341024    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:37.812387    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:37.813436    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:37.819057    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:37.819764    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:37.819764    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:33:40.064678    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:33:40.064678    9488 machine.go:97] duration metric: took 44.8684806s to provisionDockerMachine
	I0731 22:33:40.064678    9488 client.go:171] duration metric: took 1m54.7192249s to LocalClient.Create
	I0731 22:33:40.064678    9488 start.go:167] duration metric: took 1m54.7192249s to libmachine.API.Create "ha-207300"
	I0731 22:33:40.064678    9488 start.go:293] postStartSetup for "ha-207300-m02" (driver="hyperv")
	I0731 22:33:40.064678    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:33:40.077684    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:33:40.077684    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:42.201630    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:42.202400    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:42.202400    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:44.638075    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:44.638330    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:44.638766    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:33:44.740188    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6623232s)
	I0731 22:33:44.750770    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:33:44.757841    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:33:44.757841    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:33:44.758256    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:33:44.759253    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:33:44.759307    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:33:44.770650    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:33:44.788954    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:33:44.831059    9488 start.go:296] duration metric: took 4.7663197s for postStartSetup
	I0731 22:33:44.834064    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:46.924567    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:46.924778    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:46.924778    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:49.376115    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:49.376115    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:49.376817    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:33:49.379357    9488 start.go:128] duration metric: took 2m4.0387793s to createHost
	I0731 22:33:49.379357    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:51.488048    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:51.488208    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:51.488208    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:53.951762    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:53.951762    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:53.956837    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:53.957534    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:53.957534    9488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:33:54.086396    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465234.107009733
	
	I0731 22:33:54.086396    9488 fix.go:216] guest clock: 1722465234.107009733
	I0731 22:33:54.086396    9488 fix.go:229] Guest: 2024-07-31 22:33:54.107009733 +0000 UTC Remote: 2024-07-31 22:33:49.3793576 +0000 UTC m=+329.131581301 (delta=4.727652133s)
	I0731 22:33:54.086396    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:33:56.174678    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:33:56.174678    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:56.174950    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:33:58.646546    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:33:58.646753    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:33:58.652287    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:33:58.653074    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.136 22 <nil> <nil>}
	I0731 22:33:58.653074    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465234
	I0731 22:33:58.795094    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:33:54 UTC 2024
	
	I0731 22:33:58.795159    9488 fix.go:236] clock set: Wed Jul 31 22:33:54 UTC 2024
	 (err=<nil>)
	I0731 22:33:58.795159    9488 start.go:83] releasing machines lock for "ha-207300-m02", held for 2m13.4544611s
	I0731 22:33:58.795440    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:00.879249    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:00.879249    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:00.879437    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:03.392217    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:03.393129    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:03.398175    9488 out.go:177] * Found network options:
	I0731 22:34:03.401366    9488 out.go:177]   - NO_PROXY=172.17.21.92
	W0731 22:34:03.404143    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:34:03.406610    9488 out.go:177]   - NO_PROXY=172.17.21.92
	W0731 22:34:03.408556    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:34:03.409525    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:34:03.412528    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:34:03.412528    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:03.422527    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:34:03.422527    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:05.603306    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:05.603441    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:05.603441    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:05.603573    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:08.197049    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:08.197049    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:08.197848    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:34:08.214981    9488 main.go:141] libmachine: [stdout =====>] : 172.17.28.136
	
	I0731 22:34:08.214981    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:08.215537    9488 sshutil.go:53] new ssh client: &{IP:172.17.28.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m02\id_rsa Username:docker}
	I0731 22:34:08.289575    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8769854s)
	W0731 22:34:08.289575    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:34:08.307084    9488 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8844945s)
	W0731 22:34:08.308064    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:34:08.320649    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:34:08.346718    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:34:08.346718    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:34:08.347683    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:34:08.393332    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 22:34:08.405573    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:34:08.405843    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 22:34:08.425203    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:34:08.446050    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:34:08.456837    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:34:08.486959    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:34:08.515712    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:34:08.548025    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:34:08.580806    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:34:08.611467    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:34:08.642152    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:34:08.670942    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:34:08.700239    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:34:08.728504    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:34:08.757105    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:08.929125    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 22:34:08.957558    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:34:08.968354    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:34:09.000064    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:34:09.029492    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:34:09.074099    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:34:09.107080    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:34:09.141664    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:34:09.201677    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:34:09.223438    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:34:09.268180    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:34:09.284435    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:34:09.302149    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:34:09.352647    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:34:09.545846    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:34:09.721336    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:34:09.721447    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:34:09.768087    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:09.965282    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:34:12.519645    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5543304s)
	I0731 22:34:12.531683    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:34:12.564079    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:34:12.596628    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:34:12.788924    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:34:12.968374    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:13.159740    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:34:13.196480    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:34:13.230968    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:13.421009    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:34:13.537022    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:34:13.549040    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:34:13.558788    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:34:13.571894    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:34:13.588905    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:34:13.646588    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 22:34:13.655272    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:34:13.698780    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:34:13.737974    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:34:13.741019    9488 out.go:177]   - env NO_PROXY=172.17.21.92
	I0731 22:34:13.743628    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:34:13.747384    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:34:13.748334    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:34:13.750649    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:34:13.750649    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:34:13.765420    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:34:13.771337    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
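Note: the /etc/hosts update above is an idempotent replace rather than a blind append: any existing host.minikube.internal line is filtered out, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. The same pattern restated with comments (IP and hostname taken from this run):

	# drop any stale entry, append the current one, then copy the result back over /etc/hosts
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts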
	I0731 22:34:13.792602    9488 mustload.go:65] Loading cluster: ha-207300
	I0731 22:34:13.793258    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:34:13.793743    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:15.904890    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:15.904890    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:15.904890    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:34:15.905657    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.28.136
	I0731 22:34:15.905657    9488 certs.go:194] generating shared ca certs ...
	I0731 22:34:15.905657    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:15.906556    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:34:15.906764    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:34:15.907307    9488 certs.go:256] generating profile certs ...
	I0731 22:34:15.907500    9488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:34:15.908040    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2
	I0731 22:34:15.908204    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.28.136 172.17.31.254]
	I0731 22:34:16.052368    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 ...
	I0731 22:34:16.052368    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2: {Name:mk6848f579dde66d07ff396b0f8e1aa80ebe6f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:16.054652    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2 ...
	I0731 22:34:16.054652    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2: {Name:mk67e2845ee35c8c5d6eb5e9e1119a41a08fae97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:34:16.055234    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.48f058e2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:34:16.070735    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.48f058e2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:34:16.072508    9488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
	I0731 22:34:16.072508    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:34:16.073041    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:34:16.073130    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:34:16.073889    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:34:16.074516    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:34:16.074516    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:34:16.075358    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:34:16.075477    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:34:16.075839    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:34:16.076208    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:34:16.076522    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:34:16.076610    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:34:16.077205    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:34:16.077352    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:16.077559    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:34:16.077809    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:18.247403    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:18.247403    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:18.247745    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:20.785003    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:34:20.785003    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:20.785074    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:34:20.886039    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 22:34:20.892577    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:34:20.926046    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 22:34:20.937429    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 22:34:20.968251    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:34:20.975004    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:34:21.008701    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:34:21.016412    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 22:34:21.049502    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:34:21.055607    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:34:21.086799    9488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 22:34:21.092650    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:34:21.111241    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:34:21.160274    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:34:21.204452    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:34:21.254957    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:34:21.299674    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 22:34:21.346523    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:34:21.395038    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:34:21.446000    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:34:21.491370    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:34:21.537791    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:34:21.582974    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:34:21.631284    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:34:21.659795    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 22:34:21.688923    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:34:21.719195    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 22:34:21.748950    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:34:21.778337    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:34:21.807017    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:34:21.851036    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:34:21.873286    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:34:21.902791    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.910104    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.920316    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:34:21.942242    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:34:21.973783    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:34:22.007640    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.013885    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.025224    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:34:22.047232    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:34:22.076713    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:34:22.106818    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.113564    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.125269    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:34:22.144629    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
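Note: the /etc/ssl/certs/<hash>.0 links created above follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash, which is why minikubeCA.pem ends up as b5213941.0 and 123322.pem as 3ec20f2e.0 in this run. The hash and link can be reproduced with the same commands the log executes:

	# print the subject hash OpenSSL expects as the symlink name (b5213941 for minikubeCA.pem here)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# publish the CA into the hashed directory so TLS clients on the node trust it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0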
	I0731 22:34:22.175252    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:34:22.182261    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:34:22.182469    9488 kubeadm.go:934] updating node {m02 172.17.28.136 8443 v1.30.3 docker true true} ...
	I0731 22:34:22.182757    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.28.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:34:22.182757    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:34:22.193866    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:34:22.222490    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:34:22.222490    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
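Note: the manifest above is the kube-vip static pod that minikube drops into /etc/kubernetes/manifests on every control-plane node. With cp_enable and lb_enable set, the instances elect a leader via the plndr-cp-lock lease (5 s lease, 3 s renew deadline, 1 s retry) and the leader answers ARP for the shared API-server address 172.17.31.254 on eth0, load-balancing port 8443 across the control planes; that address is the APIServerHAVIP the join below targets. A quick post-start check (commands assumed, not part of this run):

	# one kube-vip static pod per control-plane node, plus a probe of the VIP itself
	kubectl -n kube-system get pods -o wide | grep kube-vip
	curl -k https://172.17.31.254:8443/version    # the VIP should route to the current lease holder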
	I0731 22:34:22.235639    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:34:22.250822    9488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:34:22.263769    9488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:34:22.284827    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0731 22:34:22.284827    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0731 22:34:22.284942    9488 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
	I0731 22:34:23.516723    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:34:23.530934    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:34:23.539308    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:34:23.539591    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:34:28.189806    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:34:28.202298    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:34:28.211426    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:34:28.211426    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:34:33.004238    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:34:33.026887    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:34:33.039506    9488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:34:33.046395    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:34:33.046395    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
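Note: kubelet, kubeadm and kubectl are downloaded once into the host-side cache (C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3) with a SHA-256 checksum fetched from dl.k8s.io, and each binary is pushed to the node over SSH only when the existence check (apparently `stat -c "%s %y"`, rendered as %!s(MISSING) by the log formatter) fails. A rough sketch of that check-then-push step for one binary ("node" is a hypothetical SSH alias; the real run copies as root via minikube's ssh_runner):

	# push kubelet from the host cache only if it is not already on the node
	if ! ssh node "stat /var/lib/minikube/binaries/v1.30.3/kubelet" >/dev/null 2>&1; then
	  scp .minikube/cache/linux/amd64/v1.30.3/kubelet node:/tmp/kubelet
	  ssh node "sudo install -m 0755 /tmp/kubelet /var/lib/minikube/binaries/v1.30.3/kubelet"
	fi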
	I0731 22:34:33.641433    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:34:33.658690    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:34:33.688056    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:34:33.720469    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0731 22:34:33.770738    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:34:33.777508    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:34:33.808590    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:34:33.998606    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:34:34.034312    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:34:34.034568    9488 start.go:317] joinCluster: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:34:34.034568    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:34:34.035457    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:34:36.192511    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:34:36.193016    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:36.193016    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:34:38.708440    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:34:38.708440    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:34:38.709406    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:34:39.406557    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3710227s)
	I0731 22:34:39.406606    9488 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:34:39.406711    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kpm2id.8a3cjgor80eivp07 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m02 --control-plane --apiserver-advertise-address=172.17.28.136 --apiserver-bind-port=8443"
	I0731 22:35:20.418539    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kpm2id.8a3cjgor80eivp07 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m02 --control-plane --apiserver-advertise-address=172.17.28.136 --apiserver-bind-port=8443": (41.0107538s)
	I0731 22:35:20.418651    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:35:21.189274    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300-m02 minikube.k8s.io/updated_at=2024_07_31T22_35_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=false
	I0731 22:35:21.366833    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-207300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:35:21.524598    9488 start.go:319] duration metric: took 47.4894264s to joinCluster
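Note: adding the second control-plane node is the two-step flow visible above: a fresh bootstrap token is minted on the existing control plane with `kubeadm token create --print-join-command --ttl=0`, then `kubeadm join` runs against control-plane.minikube.internal:8443 (the VIP) with --control-plane and m02's own advertise address; the join itself accounts for ~41 s of the 47.5 s total. Afterwards minikube labels the node and removes the control-plane NoSchedule taint so m02 can also schedule workloads. A quick post-join sanity check (commands assumed, not from this run):

	# both control-plane nodes should be listed, and etcd should now have a member on each
	kubectl get nodes -o wide
	kubectl -n kube-system get pods -l component=etcd -o wide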
	I0731 22:35:21.524689    9488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:35:21.525364    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:35:21.529866    9488 out.go:177] * Verifying Kubernetes components...
	I0731 22:35:21.545209    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:35:21.872507    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:35:21.924583    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:35:21.925338    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:35:21.925518    9488 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.31.254:8443 with https://172.17.21.92:8443
	I0731 22:35:21.925943    9488 node_ready.go:35] waiting up to 6m0s for node "ha-207300-m02" to be "Ready" ...
	I0731 22:35:21.926601    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:21.926601    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:21.926601    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:21.926601    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:21.950993    9488 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0731 22:35:22.431992    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:22.431992    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:22.431992    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:22.431992    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:22.437718    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:22.942287    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:22.942287    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:22.942356    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:22.942356    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:22.948151    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:23.434328    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:23.434472    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:23.434540    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:23.434630    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:23.439834    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:23.941224    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:23.941224    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:23.941224    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:23.941224    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:23.944598    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:23.946039    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:24.440628    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:24.440704    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:24.440704    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:24.440704    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:24.454556    9488 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0731 22:35:24.928170    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:24.928283    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:24.928283    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:24.928283    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:24.933218    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:25.434033    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:25.434118    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:25.434118    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:25.434118    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:25.439722    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:25.938844    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:25.938844    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:25.939053    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:25.939053    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:25.943269    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:26.431086    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:26.431086    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:26.431086    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:26.431086    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:26.436665    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:26.437670    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:26.940808    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:26.940808    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:26.940808    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:26.940808    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:26.945505    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:27.433124    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:27.433124    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:27.433124    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:27.433124    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:27.438656    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:27.939564    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:27.939857    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:27.939857    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:27.939857    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:27.945266    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:28.430056    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:28.430227    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:28.430290    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:28.430290    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:28.580301    9488 round_trippers.go:574] Response Status: 200 OK in 150 milliseconds
	I0731 22:35:28.583364    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:28.935461    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:28.935564    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:28.935564    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:28.935564    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:28.940187    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:29.439830    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:29.439830    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:29.439830    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:29.439830    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:29.444016    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:29.928246    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:29.928246    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:29.928246    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:29.928246    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:29.991875    9488 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0731 22:35:30.428067    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:30.428173    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:30.428173    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:30.428173    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:30.474914    9488 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0731 22:35:30.932850    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:30.933134    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:30.933134    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:30.933134    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:30.940831    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:30.942387    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:31.433279    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:31.433279    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:31.433279    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:31.433279    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:31.440212    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:31.933852    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:31.934076    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:31.934076    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:31.934076    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:31.938924    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:32.432197    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:32.432197    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:32.432197    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:32.432197    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:32.437534    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:32.932277    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:32.932335    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:32.932335    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:32.932335    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:32.936614    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:33.433241    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:33.433241    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:33.433241    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:33.433325    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:33.440904    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:33.442238    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:33.934157    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:33.934271    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:33.934271    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:33.934271    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:33.946673    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:35:34.434479    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:34.434479    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:34.434479    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:34.434479    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:34.439106    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:34.934864    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:34.934864    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:34.934864    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:34.934864    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:34.940028    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:35.433408    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:35.433530    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:35.433530    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:35.433530    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:35.439049    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:35.939155    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:35.939237    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:35.939237    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:35.939237    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:35.943596    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:35.944314    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:36.427315    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:36.427315    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:36.427315    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:36.427315    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:36.432847    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:36.928683    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:36.928683    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:36.928683    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:36.928683    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:36.933183    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:37.431032    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:37.431343    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:37.431408    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:37.431408    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:37.435956    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:37.939082    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:37.939082    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:37.939082    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:37.939082    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:37.945448    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:37.946217    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:38.438886    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:38.438970    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:38.439010    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:38.439010    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:38.443653    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:38.938723    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:38.938926    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:38.938926    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:38.938926    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:38.943558    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:39.436728    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:39.436728    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:39.436728    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:39.436728    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:39.441380    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:39.936511    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:39.936852    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:39.936852    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:39.936852    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:39.941132    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:40.437345    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:40.437345    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:40.437445    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:40.437445    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:40.442710    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:40.443867    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:40.936527    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:40.936605    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:40.936605    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:40.936605    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:40.941893    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:41.435674    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:41.435674    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:41.435674    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:41.435674    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:41.441359    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:41.940538    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:41.940538    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:41.940538    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:41.940538    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:41.945201    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:42.438880    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:42.438880    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:42.438880    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:42.438880    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:42.445609    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:42.446977    9488 node_ready.go:53] node "ha-207300-m02" has status "Ready":"False"
	I0731 22:35:42.937906    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:42.937906    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:42.937992    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:42.937992    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:42.942264    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:43.438860    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.438860    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.438860    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.438860    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.444916    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:43.926453    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.926759    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.926759    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.926759    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.931230    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:43.933087    9488 node_ready.go:49] node "ha-207300-m02" has status "Ready":"True"
	I0731 22:35:43.933146    9488 node_ready.go:38] duration metric: took 22.0068644s for node "ha-207300-m02" to be "Ready" ...
	I0731 22:35:43.933146    9488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:35:43.933233    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:43.933333    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.933333    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.933333    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.940239    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:35:43.950192    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.950773    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76ftg
	I0731 22:35:43.950940    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.950940    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.950940    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.956797    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:43.957908    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.957952    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.957952    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.958173    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.960936    9488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:35:43.962261    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.962261    9488 pod_ready.go:81] duration metric: took 11.4872ms for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.962261    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.962261    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8xt8f
	I0731 22:35:43.962261    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.962261    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.962261    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.967734    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:43.968699    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.968699    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.968699    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.968699    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.971921    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.973007    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.973007    9488 pod_ready.go:81] duration metric: took 10.7456ms for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.973007    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.973393    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300
	I0731 22:35:43.973393    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.973393    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.973393    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.977243    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.978340    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:43.978424    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.978424    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.978424    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.982096    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.983081    9488 pod_ready.go:92] pod "etcd-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.983373    9488 pod_ready.go:81] duration metric: took 10.1691ms for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.983373    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.983467    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m02
	I0731 22:35:43.983559    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.983559    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.983589    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.986750    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.987979    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:43.987979    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:43.987979    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:43.987979    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:43.991580    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:35:43.992416    9488 pod_ready.go:92] pod "etcd-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:43.992494    9488 pod_ready.go:81] duration metric: took 9.1208ms for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:43.992494    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.127498    9488 request.go:629] Waited for 134.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:35:44.127498    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:35:44.127583    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.127583    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.127583    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.137513    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:35:44.331063    9488 request.go:629] Waited for 192.8182ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:44.331425    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:44.331425    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.331425    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.331553    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.335813    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:44.336853    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:44.337029    9488 pod_ready.go:81] duration metric: took 344.5314ms for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.337029    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.535399    9488 request.go:629] Waited for 198.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:35:44.535776    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:35:44.535776    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.535776    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.535776    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.540357    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:44.737649    9488 request.go:629] Waited for 195.4463ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:44.738013    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:44.738013    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.738013    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.738013    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.743093    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:44.744343    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:44.744430    9488 pod_ready.go:81] duration metric: took 407.3954ms for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.744430    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:44.940396    9488 request.go:629] Waited for 195.742ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:35:44.940790    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:35:44.940790    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:44.940790    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:44.940790    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:44.953555    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:35:45.127368    9488 request.go:629] Waited for 172.7575ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:45.127861    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:45.127861    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.127861    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.127861    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.133173    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:45.133888    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.133888    9488 pod_ready.go:81] duration metric: took 389.4527ms for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.133888    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.330511    9488 request.go:629] Waited for 195.8988ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:35:45.330764    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:35:45.330764    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.330764    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.330764    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.335195    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.535627    9488 request.go:629] Waited for 198.6258ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.535746    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.535746    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.535746    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.535746    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.541592    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:45.542534    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.542534    9488 pod_ready.go:81] duration metric: took 408.6405ms for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.542608    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.739119    9488 request.go:629] Waited for 195.7107ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:35:45.739119    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:35:45.739119    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.739119    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.739119    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.744986    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.928317    9488 request.go:629] Waited for 182.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.928650    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:45.928720    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:45.928720    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:45.928720    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:45.933720    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:45.934510    9488 pod_ready.go:92] pod "kube-proxy-htmnf" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:45.934510    9488 pod_ready.go:81] duration metric: took 391.8967ms for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:45.934510    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.131266    9488 request.go:629] Waited for 196.5224ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:35:46.131351    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:35:46.131548    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.131548    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.131548    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.136296    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:46.335070    9488 request.go:629] Waited for 197.2709ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.335327    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.335327    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.335327    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.335327    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.340431    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:46.341834    9488 pod_ready.go:92] pod "kube-proxy-z5gbs" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:46.341834    9488 pod_ready.go:81] duration metric: took 407.3191ms for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.341834    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.537160    9488 request.go:629] Waited for 195.3234ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:35:46.537537    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:35:46.537729    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.537729    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.537729    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.545635    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:46.740481    9488 request.go:629] Waited for 194.0038ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.740667    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:35:46.740667    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.740667    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.740769    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.751063    9488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 22:35:46.752805    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:46.752805    9488 pod_ready.go:81] duration metric: took 410.9658ms for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.752805    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:46.928652    9488 request.go:629] Waited for 175.6863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:35:46.928820    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:35:46.928906    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:46.928906    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:46.928906    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:46.933192    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:47.129733    9488 request.go:629] Waited for 195.4818ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:47.129733    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:35:47.129971    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.129971    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.129971    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.134352    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:35:47.135099    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:35:47.135630    9488 pod_ready.go:81] duration metric: took 382.7192ms for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:35:47.135630    9488 pod_ready.go:38] duration metric: took 3.2024427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
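
The pod_ready waits logged above poll each system pod (and its node) until the pod's Ready condition is True. A minimal client-go sketch of that kind of check is below; the kubeconfig path is an illustrative assumption, while the namespace and pod name are taken from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Illustrative kubeconfig path; the pod name is the coredns pod from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-76ftg", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			panic("timed out waiting for pod to become Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }
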
	I0731 22:35:47.135630    9488 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:35:47.149378    9488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:35:47.177875    9488 api_server.go:72] duration metric: took 25.6527504s to wait for apiserver process to appear ...
	I0731 22:35:47.177875    9488 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:35:47.177875    9488 api_server.go:253] Checking apiserver healthz at https://172.17.21.92:8443/healthz ...
	I0731 22:35:47.187967    9488 api_server.go:279] https://172.17.21.92:8443/healthz returned 200:
	ok
	I0731 22:35:47.188092    9488 round_trippers.go:463] GET https://172.17.21.92:8443/version
	I0731 22:35:47.188092    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.188092    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.188092    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.189678    9488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 22:35:47.190390    9488 api_server.go:141] control plane version: v1.30.3
	I0731 22:35:47.190435    9488 api_server.go:131] duration metric: took 12.5598ms to wait for apiserver health ...
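
The healthz wait logged above is a plain HTTPS GET against the apiserver until it answers 200 "ok". A minimal sketch of such a probe, using a generic HTTP client with certificate verification skipped purely for illustration (minikube's own client is configured differently):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls <base>/healthz until it returns 200 or ctx expires.
    func waitForHealthz(ctx context.Context, base string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		req, err := http.NewRequestWithContext(ctx, http.MethodGet, base+"/healthz", nil)
    		if err != nil {
    			return err
    		}
    		resp, err := client.Do(req)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	// Endpoint taken from the log above.
    	if err := waitForHealthz(ctx, "https://172.17.21.92:8443"); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver healthz returned 200")
    }
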
	I0731 22:35:47.190435    9488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:35:47.334613    9488 request.go:629] Waited for 143.6815ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.334613    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.334613    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.334613    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.334613    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.343260    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:35:47.351103    9488 system_pods.go:59] 17 kube-system pods found
	I0731 22:35:47.351103    9488 system_pods.go:61] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:35:47.351103    9488 system_pods.go:61] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:35:47.351619    9488 system_pods.go:61] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:35:47.351741    9488 system_pods.go:61] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:35:47.351870    9488 system_pods.go:61] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:35:47.351870    9488 system_pods.go:74] duration metric: took 161.4325ms to wait for pod list to return data ...
	I0731 22:35:47.351870    9488 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:35:47.527068    9488 request.go:629] Waited for 175.1961ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:35:47.527068    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:35:47.527068    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.527068    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.527068    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.533253    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:35:47.533699    9488 default_sa.go:45] found service account: "default"
	I0731 22:35:47.533776    9488 default_sa.go:55] duration metric: took 181.9036ms for default service account to be created ...
	I0731 22:35:47.533776    9488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:35:47.739837    9488 request.go:629] Waited for 205.8422ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.739837    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:35:47.739837    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.739837    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.739837    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.747834    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:47.755035    9488 system_pods.go:86] 17 kube-system pods found
	I0731 22:35:47.755035    9488 system_pods.go:89] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:35:47.755572    9488 system_pods.go:89] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:35:47.755610    9488 system_pods.go:89] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:35:47.755656    9488 system_pods.go:89] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:35:47.755722    9488 system_pods.go:89] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:35:47.755722    9488 system_pods.go:126] duration metric: took 221.9437ms to wait for k8s-apps to be running ...
	I0731 22:35:47.755722    9488 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:35:47.768176    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:35:47.792677    9488 system_svc.go:56] duration metric: took 36.9545ms WaitForService to wait for kubelet
	I0731 22:35:47.792735    9488 kubeadm.go:582] duration metric: took 26.2676027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:35:47.792824    9488 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:35:47.941797    9488 request.go:629] Waited for 148.9113ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes
	I0731 22:35:47.941982    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes
	I0731 22:35:47.941982    9488 round_trippers.go:469] Request Headers:
	I0731 22:35:47.941982    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:35:47.941982    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:35:47.949390    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:35:47.950110    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:35:47.950110    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:47.950110    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:35:47.950110    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:35:47.950110    9488 node_conditions.go:105] duration metric: took 157.2847ms to run NodePressure ...
	I0731 22:35:47.950110    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:35:47.950110    9488 start.go:255] writing updated cluster config ...
	I0731 22:35:47.955043    9488 out.go:177] 
	I0731 22:35:47.970457    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:35:47.970457    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:35:47.978709    9488 out.go:177] * Starting "ha-207300-m03" control-plane node in "ha-207300" cluster
	I0731 22:35:47.981408    9488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 22:35:47.981408    9488 cache.go:56] Caching tarball of preloaded images
	I0731 22:35:47.982106    9488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 22:35:47.982106    9488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 22:35:47.982106    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:35:47.988064    9488 start.go:360] acquireMachinesLock for ha-207300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:35:47.988393    9488 start.go:364] duration metric: took 329.1µs to acquireMachinesLock for "ha-207300-m03"
	I0731 22:35:47.988713    9488 start.go:93] Provisioning new machine with config: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:35:47.988770    9488 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0731 22:35:47.992938    9488 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:35:47.993291    9488 start.go:159] libmachine.API.Create for "ha-207300" (driver="hyperv")
	I0731 22:35:47.993381    9488 client.go:168] LocalClient.Create starting
	I0731 22:35:47.993905    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 22:35:47.994188    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:35:47.994188    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:35:47.994402    9488 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 22:35:47.994673    9488 main.go:141] libmachine: Decoding PEM data...
	I0731 22:35:47.994749    9488 main.go:141] libmachine: Parsing certificate...
	I0731 22:35:47.994807    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:49.863156    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 22:35:51.579446    9488 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 22:35:51.579446    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:51.579699    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:35:53.076871    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:35:53.076871    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:35:53.076960    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:35:56.795994    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:35:56.795994    9488 main.go:141] libmachine: [stderr =====>] : 
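
Each [executing ==>] entry above is minikube shelling out to powershell.exe and parsing its stdout. A minimal sketch of that pattern for the switch query, assuming the same ConvertTo-Json output shape shown in the log (the Where-Object filter is omitted for brevity):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // vmSwitch mirrors the fields selected by the ConvertTo-Json query in the log.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	// List Hyper-V virtual switches as JSON, as the log shows minikube doing.
    	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
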
	I0731 22:35:56.798779    9488 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:35:57.240613    9488 main.go:141] libmachine: Creating SSH key...
	I0731 22:35:57.538184    9488 main.go:141] libmachine: Creating VM...
	I0731 22:35:57.539189    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 22:36:00.428942    9488 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 22:36:00.429009    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:00.429123    9488 main.go:141] libmachine: Using switch "Default Switch"
	I0731 22:36:00.429166    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 22:36:02.174101    9488 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 22:36:02.175149    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:02.175149    9488 main.go:141] libmachine: Creating VHD
	I0731 22:36:02.175287    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 22:36:05.954790    9488 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F8A42F50-25AA-47EC-8979-49537C925629
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 22:36:05.955779    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:05.955779    9488 main.go:141] libmachine: Writing magic tar header
	I0731 22:36:05.955779    9488 main.go:141] libmachine: Writing SSH key tar header
	I0731 22:36:05.970156    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 22:36:09.235871    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:09.235871    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:09.236528    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd' -SizeBytes 20000MB
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:11.858569    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 22:36:15.517490    9488 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-207300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 22:36:15.517848    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:15.517988    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-207300-m03 -DynamicMemoryEnabled $false
	I0731 22:36:17.843170    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:17.844314    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:17.844395    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-207300-m03 -Count 2
	I0731 22:36:20.022845    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:20.022845    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:20.023166    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\boot2docker.iso'
	I0731 22:36:22.614596    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:22.615193    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:22.617253    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-207300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\disk.vhd'
	I0731 22:36:25.292881    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:25.293295    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:25.293295    9488 main.go:141] libmachine: Starting VM...
	I0731 22:36:25.293295    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-207300-m03
	I0731 22:36:28.387819    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:28.388125    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:28.388125    9488 main.go:141] libmachine: Waiting for host to start...
	I0731 22:36:28.388200    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:30.701611    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:30.701648    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:30.701648    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:33.271812    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:33.271812    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:34.282959    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:36.517316    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:36.517515    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:36.517515    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:39.077587    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:39.077587    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:40.091768    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:42.310468    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:42.311179    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:42.311262    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:44.845807    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:44.846235    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:45.851974    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:48.102801    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:50.758140    9488 main.go:141] libmachine: [stdout =====>] : 
	I0731 22:36:50.758174    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:51.768442    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:54.027028    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:56.606839    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:36:58.772984    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:36:58.773885    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:36:58.773885    9488 machine.go:94] provisionDockerMachine start ...
	I0731 22:36:58.774158    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:00.971846    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:00.971846    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:00.972000    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:03.523877    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:03.523877    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:03.531195    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:03.543089    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:03.543089    9488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:37:03.687934    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 22:37:03.688034    9488 buildroot.go:166] provisioning hostname "ha-207300-m03"
	I0731 22:37:03.688115    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:05.863860    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:05.863860    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:05.864050    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:08.467287    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:08.467287    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:08.474067    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:08.474203    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:08.474203    9488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-207300-m03 && echo "ha-207300-m03" | sudo tee /etc/hostname
	I0731 22:37:08.626461    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-207300-m03
	
	I0731 22:37:08.626461    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:10.764048    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:13.302675    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:13.302675    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:13.307869    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:13.308400    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:13.308400    9488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-207300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-207300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-207300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:37:13.452814    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
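
Provisioning commands like the hostname change above are executed over SSH against the new VM at 172.17.27.253. A minimal sketch of that transport using golang.org/x/crypto/ssh; the private-key path is an illustrative assumption (the run uses the machine's generated id_rsa), and host-key checking is skipped only because this is a throwaway test VM:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Illustrative key path; the real run reads the machine's id_rsa from the minikube profile dir.
    	key, err := os.ReadFile("/path/to/machines/ha-207300-m03/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
    	}
    	client, err := ssh.Dial("tcp", "172.17.27.253:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	// Same hostname command the log shows being run on the guest.
    	out, err := session.CombinedOutput(`sudo hostname ha-207300-m03 && echo "ha-207300-m03" | sudo tee /etc/hostname`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }
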
	I0731 22:37:13.452814    9488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 22:37:13.452814    9488 buildroot.go:174] setting up certificates
	I0731 22:37:13.452814    9488 provision.go:84] configureAuth start
	I0731 22:37:13.452814    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:15.576665    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:15.577655    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:15.577778    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:18.122403    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:18.122531    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:18.122531    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:20.257824    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:20.258100    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:20.258192    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:22.832950    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:22.834390    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:22.834434    9488 provision.go:143] copyHostCerts
	I0731 22:37:22.834650    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 22:37:22.835178    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 22:37:22.835178    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 22:37:22.835715    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 22:37:22.837345    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 22:37:22.837631    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 22:37:22.837738    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 22:37:22.838156    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 22:37:22.839556    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 22:37:22.839947    9488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 22:37:22.840089    9488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 22:37:22.840611    9488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 22:37:22.841861    9488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-207300-m03 san=[127.0.0.1 172.17.27.253 ha-207300-m03 localhost minikube]
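The server certificate generated here is what dockerd on this node will present on tcp/2376, and its SAN list (127.0.0.1, 172.17.27.253, ha-207300-m03, localhost, minikube) is taken straight from this log line. If the SANs ever need checking, a hedged one-liner against the generated server.pem (the openssl call is assumed, not part of the run):
	openssl x509 -noout -text -in <MINIKUBE_HOME>/machines/server.pem | grep -A1 "Subject Alternative Name"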
	I0731 22:37:22.938174    9488 provision.go:177] copyRemoteCerts
	I0731 22:37:22.955709    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:37:22.955709    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:25.094045    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:25.094548    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:25.094548    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:27.676229    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:27.676229    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:27.676229    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:37:27.782588    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8268174s)
	I0731 22:37:27.782588    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 22:37:27.783365    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 22:37:27.829689    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 22:37:27.830096    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:37:27.873043    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 22:37:27.873362    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:37:27.915544    9488 provision.go:87] duration metric: took 14.4624758s to configureAuth
	I0731 22:37:27.915742    9488 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:37:27.916413    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:37:27.916477    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:30.048834    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:30.048834    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:30.049185    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:32.584705    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:32.584705    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:32.590486    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:32.591151    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:32.591151    9488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 22:37:32.708807    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 22:37:32.708940    9488 buildroot.go:70] root file system type: tmpfs
	I0731 22:37:32.709141    9488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 22:37:32.709141    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:34.838829    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:34.838829    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:34.839467    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:37.382578    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:37.382578    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:37.389343    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:37.390002    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:37.390002    9488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.21.92"
	Environment="NO_PROXY=172.17.21.92,172.17.28.136"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 22:37:37.535843    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.21.92
	Environment=NO_PROXY=172.17.21.92,172.17.28.136
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 22:37:37.536008    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:39.656878    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:39.657936    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:39.658019    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:42.178995    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:42.178995    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:42.185141    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:42.185957    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:42.185957    9488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 22:37:44.399048    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 22:37:44.399112    9488 machine.go:97] duration metric: took 45.6246476s to provisionDockerMachine
	I0731 22:37:44.399204    9488 client.go:171] duration metric: took 1m56.4042867s to LocalClient.Create
	I0731 22:37:44.399271    9488 start.go:167] duration metric: took 1m56.4045003s to libmachine.API.Create "ha-207300"
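At this point dockerd is up on tcp://0.0.0.0:2376 with the TLS material copied to /etc/docker, so the daemon can also be reached from the host using the client certs handled by copyHostCerts. A hedged sketch of that check (paths written for a generic MINIKUBE_HOME; on this runner they sit under minikube-integration\.minikube):
	docker --host tcp://172.17.27.253:2376 --tlsverify \
	  --tlscacert <MINIKUBE_HOME>/certs/ca.pem \
	  --tlscert   <MINIKUBE_HOME>/certs/cert.pem \
	  --tlskey    <MINIKUBE_HOME>/certs/key.pem version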
	I0731 22:37:44.399379    9488 start.go:293] postStartSetup for "ha-207300-m03" (driver="hyperv")
	I0731 22:37:44.399409    9488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:37:44.412864    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:37:44.412864    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:46.571406    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:46.571406    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:46.572480    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:49.101309    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:49.101309    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:49.102190    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:37:49.208596    9488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.795671s)
	I0731 22:37:49.220887    9488 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:37:49.228395    9488 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:37:49.228395    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 22:37:49.228395    9488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 22:37:49.229805    9488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 22:37:49.229805    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 22:37:49.240654    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:37:49.259406    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 22:37:49.308389    9488 start.go:296] duration metric: took 4.9089182s for postStartSetup
	I0731 22:37:49.311498    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:51.461712    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:53.995671    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:53.995671    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:53.996412    9488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\config.json ...
	I0731 22:37:53.998927    9488 start.go:128] duration metric: took 2m6.008555s to createHost
	I0731 22:37:53.998927    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:37:56.127180    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:37:56.127180    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:56.127398    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:37:58.677704    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:37:58.678660    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:37:58.685210    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:37:58.685390    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:37:58.685390    9488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 22:37:58.812499    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465478.832201479
	
	I0731 22:37:58.812566    9488 fix.go:216] guest clock: 1722465478.832201479
	I0731 22:37:58.812566    9488 fix.go:229] Guest: 2024-07-31 22:37:58.832201479 +0000 UTC Remote: 2024-07-31 22:37:53.9989272 +0000 UTC m=+573.748040801 (delta=4.833274279s)
	I0731 22:37:58.812630    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:00.946426    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:00.946426    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:00.946675    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:03.484827    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:03.484935    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:03.490940    9488 main.go:141] libmachine: Using SSH client type: native
	I0731 22:38:03.491579    9488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.253 22 <nil> <nil>}
	I0731 22:38:03.491579    9488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722465478
	I0731 22:38:03.621316    9488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 22:37:58 UTC 2024
	
	I0731 22:38:03.621316    9488 fix.go:236] clock set: Wed Jul 31 22:37:58 UTC 2024
	 (err=<nil>)
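The ~4.8s delta above comes from comparing the guest's date +%s.%N output with the host clock, after which the guest is snapped to host time with sudo date -s @<epoch>. The same comparison done by hand would look roughly like this (the ssh wrapper is an assumption, not part of the run):
	host=$(date +%s)
	guest=$(ssh -i <machine-key> docker@172.17.27.253 'date +%s')
	echo "guest minus host: $((guest - host))s"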
	I0731 22:38:03.621316    9488 start.go:83] releasing machines lock for "ha-207300-m03", held for 2m15.6310507s
	I0731 22:38:03.621316    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:05.821864    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:05.821864    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:05.822223    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:08.377481    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:08.378625    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:08.381202    9488 out.go:177] * Found network options:
	I0731 22:38:08.384467    9488 out.go:177]   - NO_PROXY=172.17.21.92,172.17.28.136
	W0731 22:38:08.386829    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.387088    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:38:08.389187    9488 out.go:177]   - NO_PROXY=172.17.21.92,172.17.28.136
	W0731 22:38:08.391970    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.391970    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.392984    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:38:08.392984    9488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:38:08.395329    9488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 22:38:08.395329    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:08.406380    9488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 22:38:08.406616    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:38:10.621828    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:10.621890    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:10.621890    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:10.638202    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:13.369428    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:13.369428    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:13.370049    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:38:13.393750    9488 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:38:13.394091    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:13.394229    9488 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:38:13.463314    9488 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0568696s)
	W0731 22:38:13.463385    9488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:38:13.475831    9488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:38:13.480687    9488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0852932s)
	W0731 22:38:13.480687    9488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 22:38:13.513595    9488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:38:13.513595    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:38:13.513595    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:38:13.566572    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 22:38:13.579744    9488 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 22:38:13.579851    9488 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 22:38:13.604665    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 22:38:13.624648    9488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 22:38:13.635032    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 22:38:13.664996    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:38:13.696282    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 22:38:13.725313    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 22:38:13.759852    9488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:38:13.791303    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 22:38:13.822923    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 22:38:13.853010    9488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 22:38:13.886769    9488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:38:13.915493    9488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:38:13.948046    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:14.144913    9488 ssh_runner.go:195] Run: sudo systemctl restart containerd
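The sed edits above switch containerd to the cgroupfs cgroup driver (SystemdCgroup = false), pin the pause image to registry.k8s.io/pause:3.9, move runc to io.containerd.runc.v2 and point CNI at /etc/cni/net.d before the daemon-reload and restart. A quick way to see the result of those edits on the guest (the grep is an assumption, not part of the run):
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml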
	I0731 22:38:14.175937    9488 start.go:495] detecting cgroup driver to use...
	I0731 22:38:14.187882    9488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 22:38:14.225220    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:38:14.259812    9488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:38:14.303454    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:38:14.341741    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:38:14.376954    9488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 22:38:14.440515    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 22:38:14.462651    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:38:14.506868    9488 ssh_runner.go:195] Run: which cri-dockerd
	I0731 22:38:14.525517    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 22:38:14.543985    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 22:38:14.587804    9488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 22:38:14.778295    9488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 22:38:14.959687    9488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 22:38:14.959687    9488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 22:38:15.008256    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:15.210518    9488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 22:38:17.804410    9488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5929023s)
	I0731 22:38:17.816430    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 22:38:17.853263    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:38:17.887154    9488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 22:38:18.078419    9488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 22:38:18.285769    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:18.481948    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 22:38:18.521357    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 22:38:18.556479    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:18.746795    9488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 22:38:18.851229    9488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 22:38:18.863736    9488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 22:38:18.873357    9488 start.go:563] Will wait 60s for crictl version
	I0731 22:38:18.886310    9488 ssh_runner.go:195] Run: which crictl
	I0731 22:38:18.904085    9488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:38:18.953573    9488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 22:38:18.966158    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:38:19.007385    9488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 22:38:19.049914    9488 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 22:38:19.052797    9488 out.go:177]   - env NO_PROXY=172.17.21.92
	I0731 22:38:19.055822    9488 out.go:177]   - env NO_PROXY=172.17.21.92,172.17.28.136
	I0731 22:38:19.058706    9488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 22:38:19.063626    9488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 22:38:19.064567    9488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 22:38:19.069685    9488 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 22:38:19.069791    9488 ip.go:210] interface addr: 172.17.16.1/20
	I0731 22:38:19.081823    9488 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 22:38:19.087529    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:38:19.109003    9488 mustload.go:65] Loading cluster: ha-207300
	I0731 22:38:19.109356    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:38:19.111550    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:21.247915    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:21.248404    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:21.248404    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:38:21.249311    9488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300 for IP: 172.17.27.253
	I0731 22:38:21.249439    9488 certs.go:194] generating shared ca certs ...
	I0731 22:38:21.249439    9488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.250070    9488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 22:38:21.250509    9488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 22:38:21.250652    9488 certs.go:256] generating profile certs ...
	I0731 22:38:21.251523    9488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\client.key
	I0731 22:38:21.251646    9488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5
	I0731 22:38:21.251728    9488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.21.92 172.17.28.136 172.17.27.253 172.17.31.254]
	I0731 22:38:21.418588    9488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 ...
	I0731 22:38:21.418588    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5: {Name:mka680cd694ec31b470fbdebbe35a08239b1d83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.420533    9488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5 ...
	I0731 22:38:21.420533    9488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5: {Name:mkb956b21f624b97c9b78796c61257e2a25e2069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:38:21.421539    9488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt.169207b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt
	I0731 22:38:21.434019    9488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key.169207b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key
	I0731 22:38:21.437185    9488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key
	I0731 22:38:21.437185    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:38:21.437185    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:38:21.438205    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:38:21.438491    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:38:21.438491    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:38:21.438714    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:38:21.439360    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 22:38:21.439658    9488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 22:38:21.439658    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 22:38:21.439966    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 22:38:21.440240    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 22:38:21.440482    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 22:38:21.440732    9488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 22:38:21.440732    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 22:38:21.441292    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 22:38:21.441459    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:21.441606    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:23.595247    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:23.596132    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:23.596132    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:26.149483    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:38:26.149483    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:26.149483    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:38:26.254263    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0731 22:38:26.262653    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:38:26.300121    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0731 22:38:26.307420    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 22:38:26.346166    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:38:26.351662    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:38:26.381697    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:38:26.388772    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 22:38:26.419530    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:38:26.425804    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:38:26.458978    9488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0731 22:38:26.465916    9488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:38:26.484999    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:38:26.536469    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:38:26.580256    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:38:26.622999    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 22:38:26.666978    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 22:38:26.713161    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 22:38:26.761396    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:38:26.806890    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-207300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:38:26.856170    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 22:38:26.905475    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 22:38:26.951448    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:38:26.995855    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:38:27.027531    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 22:38:27.063394    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:38:27.094437    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 22:38:27.125319    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:38:27.164774    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:38:27.195641    9488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:38:27.239891    9488 ssh_runner.go:195] Run: openssl version
	I0731 22:38:27.258647    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:38:27.288264    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.295030    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.306904    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:38:27.327832    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:38:27.359037    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 22:38:27.390179    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.396470    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.408404    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 22:38:27.427323    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 22:38:27.458151    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 22:38:27.489304    9488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.495886    9488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.509620    9488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 22:38:27.530754    9488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
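Each CA installed above also gets an OpenSSL-style hash symlink in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0), which is how TLS clients on the guest locate the minikube and user CAs by subject hash. The pattern, written out as a standalone sketch for one of them:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"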
	I0731 22:38:27.562328    9488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:38:27.568520    9488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:38:27.568838    9488 kubeadm.go:934] updating node {m03 172.17.27.253 8443 v1.30.3 docker true true} ...
	I0731 22:38:27.569062    9488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-207300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.27.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:38:27.569062    9488 kube-vip.go:115] generating kube-vip config ...
	I0731 22:38:27.580713    9488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:38:27.605507    9488 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:38:27.605638    9488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
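This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip as a static pod that holds the control-plane VIP 172.17.31.254:8443 and load-balances across the control-plane nodes. Once the node has joined, a hedged check for it (the pod name follows the usual <manifest>-<node> static-pod convention and is an assumption):
	kubectl --context ha-207300 -n kube-system get pod kube-vip-ha-207300-m03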
	I0731 22:38:27.620004    9488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:38:27.638642    9488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:38:27.650989    9488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:38:27.670987    9488 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
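Each of the three download URLs carries a checksum=file:...sha256 hint, so the fetched binary is verified against the sha256 file published next to it. The equivalent manual check for one binary (commands assumed; URLs copied from the log):
	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm
	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check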
	I0731 22:38:27.670987    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:38:27.670987    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:38:27.686681    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:38:27.687887    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:38:27.687887    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:38:27.708794    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:38:27.708794    9488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:38:27.708871    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:38:27.708996    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:38:27.709044    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:38:27.720931    9488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:38:27.766557    9488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:38:27.766890    9488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 22:38:29.072388    9488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:38:29.090786    9488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:38:29.124611    9488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:38:29.154800    9488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0731 22:38:29.202884    9488 ssh_runner.go:195] Run: grep 172.17.31.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:38:29.209804    9488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:38:29.250407    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:38:29.450088    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:38:29.481244    9488 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:38:29.482337    9488 start.go:317] joinCluster: &{Name:ha-207300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-207300 Namespace:default APIServerHAVIP:172.17.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.21.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:38:29.482728    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:38:29.482844    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:38:31.641168    9488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:38:31.641742    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:31.641843    9488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:38:34.242737    9488 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:38:34.242819    9488 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:38:34.243999    9488 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:38:34.456390    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9735989s)
	I0731 22:38:34.456503    9488 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:38:34.456604    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w383sl.wdrouoc2exvlz1v1 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m03 --control-plane --apiserver-advertise-address=172.17.27.253 --apiserver-bind-port=8443"
	I0731 22:39:17.459471    9488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token w383sl.wdrouoc2exvlz1v1 --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-207300-m03 --control-plane --apiserver-advertise-address=172.17.27.253 --apiserver-bind-port=8443": (43.0022533s)
	I0731 22:39:17.459621    9488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:39:18.339866    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-207300-m03 minikube.k8s.io/updated_at=2024_07_31T22_39_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-207300 minikube.k8s.io/primary=false
	I0731 22:39:18.532242    9488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-207300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:39:18.678788    9488 start.go:319] duration metric: took 49.1958265s to joinCluster
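
(Editor's note: the join sequence above is driven by two commands visible in the log: "kubeadm token create --print-join-command --ttl=0" on an existing control-plane node, followed by the printed "kubeadm join ... --control-plane" on the new machine. The Go sketch below is a minimal, hypothetical illustration of generating that join command locally with os/exec; it is not minikube's implementation, which runs the command over SSH via ssh_runner.)

package main

import (
	"fmt"
	"os/exec"
)

// Minimal sketch: ask kubeadm on an existing control-plane node for a
// join command with a non-expiring token, mirroring the log's
// "kubeadm token create --print-join-command --ttl=0" step.
// Assumes kubeadm is on PATH and the process may invoke sudo.
func main() {
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		fmt.Printf("kubeadm token create failed: %v\n%s", err, out)
		return
	}
	// The printed command is then run on the joining node with extra flags
	// such as --control-plane and --apiserver-advertise-address, as in the log.
	fmt.Printf("join command: %s", out)
}
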
	I0731 22:39:18.678986    9488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.17.27.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 22:39:18.680028    9488 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:39:18.681607    9488 out.go:177] * Verifying Kubernetes components...
	I0731 22:39:18.698250    9488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:39:19.080298    9488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:39:19.111942    9488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:39:19.113598    9488 kapi.go:59] client config for ha-207300: &rest.Config{Host:"https://172.17.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-207300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:39:19.113652    9488 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.31.254:8443 with https://172.17.21.92:8443
	I0731 22:39:19.115857    9488 node_ready.go:35] waiting up to 6m0s for node "ha-207300-m03" to be "Ready" ...
	I0731 22:39:19.116058    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:19.116130    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:19.116130    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:19.116130    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:19.132504    9488 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 22:39:19.620429    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:19.620429    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:19.620429    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:19.620429    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:19.633133    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:20.129513    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:20.129513    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:20.129513    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:20.129513    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:20.138192    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:20.621713    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:20.622001    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:20.622001    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:20.622001    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:20.715209    9488 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0731 22:39:21.127693    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:21.127693    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:21.127693    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:21.127693    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:21.132288    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:21.133294    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:21.617086    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:21.617163    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:21.617163    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:21.617163    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:21.622694    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:22.120096    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:22.120096    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:22.120096    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:22.120096    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:22.252692    9488 round_trippers.go:574] Response Status: 200 OK in 132 milliseconds
	I0731 22:39:22.625952    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:22.625952    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:22.625952    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:22.625952    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:22.633696    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:23.130357    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:23.130357    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:23.130357    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:23.130357    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:23.145235    9488 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 22:39:23.146868    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:23.618945    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:23.618945    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:23.619012    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:23.619012    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:23.623268    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:24.124224    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:24.124224    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:24.124224    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:24.124224    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:24.127803    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:24.628644    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:24.628644    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:24.628644    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:24.628644    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:24.633288    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:25.128649    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:25.128795    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:25.128852    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:25.128852    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:25.135749    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:25.625537    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:25.625654    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:25.625654    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:25.625654    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:25.629252    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:25.631152    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:26.129280    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:26.129340    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:26.129340    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:26.129340    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:26.132504    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:26.629343    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:26.629343    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:26.629343    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:26.629343    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:26.635204    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:27.117728    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:27.117728    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:27.117809    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:27.117809    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:27.127153    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:39:27.617010    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:27.617270    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:27.617270    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:27.617270    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:27.629856    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:39:27.631313    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:28.119711    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:28.119816    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:28.119905    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:28.119905    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:28.124748    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:28.618412    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:28.618412    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:28.618412    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:28.618412    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:28.623053    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:29.118745    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:29.118839    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:29.118839    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:29.118839    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:29.123638    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:29.632011    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:29.632011    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:29.632126    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:29.632126    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:29.641429    9488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 22:39:29.642414    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:30.117775    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:30.117775    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:30.117775    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:30.117775    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:30.122413    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:30.617932    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:30.618237    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:30.618237    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:30.618237    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:30.623319    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:31.123580    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:31.123784    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:31.123784    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:31.123784    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:31.128649    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:31.625599    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:31.625788    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:31.625788    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:31.625788    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:31.630589    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:32.128889    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:32.129140    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:32.129140    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:32.129140    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:32.134517    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:32.136207    9488 node_ready.go:53] node "ha-207300-m03" has status "Ready":"False"
	I0731 22:39:32.626966    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:32.626966    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:32.626966    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:32.627058    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:32.632680    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:33.128463    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:33.128579    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:33.128742    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:33.128742    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:33.133922    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:33.628254    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:33.628365    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:33.628365    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:33.628365    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:33.634665    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:34.127530    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.127530    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.127530    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.127530    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.132579    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:34.133286    9488 node_ready.go:49] node "ha-207300-m03" has status "Ready":"True"
	I0731 22:39:34.133286    9488 node_ready.go:38] duration metric: took 15.0172036s for node "ha-207300-m03" to be "Ready" ...
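
(Editor's note: the 15-second wait above is the node-readiness poll: repeated GETs of /api/v1/nodes/ha-207300-m03 roughly every 500ms until the node reports Ready. The sketch below shows the same check with client-go; the kubeconfig path is an illustrative assumption, only the node name comes from the log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Minimal sketch of the node_ready wait: poll the node object until its
// Ready condition is True. The kubeconfig path below is an assumption.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-207300-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
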
	I0731 22:39:34.133286    9488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:39:34.133286    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:34.133286    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.133286    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.133286    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.146011    9488 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 22:39:34.155953    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.156945    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76ftg
	I0731 22:39:34.156945    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.156945    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.156945    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.160664    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.161629    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.161629    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.161629    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.161629    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.165217    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.165217    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.166210    9488 pod_ready.go:81] duration metric: took 10.2568ms for pod "coredns-7db6d8ff4d-76ftg" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.166210    9488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.166210    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8xt8f
	I0731 22:39:34.166210    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.166210    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.166210    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.170253    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.171158    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.171158    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.171158    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.171158    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.174291    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.175226    9488 pod_ready.go:92] pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.175226    9488 pod_ready.go:81] duration metric: took 9.0159ms for pod "coredns-7db6d8ff4d-8xt8f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.175226    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.175226    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300
	I0731 22:39:34.175226    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.175226    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.175226    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.179818    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.181230    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.181288    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.181325    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.181325    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.185632    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.186694    9488 pod_ready.go:92] pod "etcd-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.186694    9488 pod_ready.go:81] duration metric: took 11.4682ms for pod "etcd-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.186694    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.186694    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m02
	I0731 22:39:34.186694    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.186694    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.186694    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.191645    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.193290    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:34.193376    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.193442    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.193503    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.197138    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:34.197138    9488 pod_ready.go:92] pod "etcd-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.197138    9488 pod_ready.go:81] duration metric: took 10.4435ms for pod "etcd-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.197138    9488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.333237    9488 request.go:629] Waited for 136.0974ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m03
	I0731 22:39:34.333435    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/etcd-ha-207300-m03
	I0731 22:39:34.333435    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.333435    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.333435    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.341661    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:34.536883    9488 request.go:629] Waited for 194.2327ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.537198    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:34.537198    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.537198    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.537198    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.542244    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:34.543619    9488 pod_ready.go:92] pod "etcd-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.543619    9488 pod_ready.go:81] duration metric: took 346.4766ms for pod "etcd-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.543619    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.742359    9488 request.go:629] Waited for 198.5582ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:39:34.742561    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300
	I0731 22:39:34.742561    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.742561    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.742629    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.759289    9488 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 22:39:34.933114    9488 request.go:629] Waited for 172.1937ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.933253    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:34.933253    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:34.933253    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:34.933253    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:34.937836    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:34.939379    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:34.939439    9488 pod_ready.go:81] duration metric: took 395.8152ms for pod "kube-apiserver-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:34.939439    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.137645    9488 request.go:629] Waited for 198.0637ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:39:35.138139    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m02
	I0731 22:39:35.138198    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.138198    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.138198    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.143569    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:35.338798    9488 request.go:629] Waited for 193.894ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:35.339097    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:35.339254    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.339254    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.339254    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.346825    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:35.347552    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:35.347552    9488 pod_ready.go:81] duration metric: took 408.1079ms for pod "kube-apiserver-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.347552    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.541780    9488 request.go:629] Waited for 193.941ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m03
	I0731 22:39:35.541904    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-207300-m03
	I0731 22:39:35.541904    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.541904    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.542049    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.548087    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:35.731665    9488 request.go:629] Waited for 181.8585ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:35.731796    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:35.731916    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.732007    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.732033    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.736421    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:35.737457    9488 pod_ready.go:92] pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:35.737457    9488 pod_ready.go:81] duration metric: took 389.8998ms for pod "kube-apiserver-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.737457    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:35.935888    9488 request.go:629] Waited for 198.1885ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:39:35.936120    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300
	I0731 22:39:35.936120    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:35.936120    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:35.936120    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:35.940271    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:36.137971    9488 request.go:629] Waited for 195.6006ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:36.137971    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:36.137971    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.137971    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.138187    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.143854    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:36.144642    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.144642    9488 pod_ready.go:81] duration metric: took 407.06ms for pod "kube-controller-manager-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.144642    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.341284    9488 request.go:629] Waited for 196.5075ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:39:36.341455    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m02
	I0731 22:39:36.341455    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.341501    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.341501    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.346279    9488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:39:36.528962    9488 request.go:629] Waited for 181.3838ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:36.528962    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:36.528962    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.528962    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.528962    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.533545    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:36.534765    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.534765    9488 pod_ready.go:81] duration metric: took 390.1178ms for pod "kube-controller-manager-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.534765    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.732225    9488 request.go:629] Waited for 197.4573ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m03
	I0731 22:39:36.732581    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-207300-m03
	I0731 22:39:36.732581    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.732581    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.732671    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.738437    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:36.937609    9488 request.go:629] Waited for 197.9142ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:36.937609    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:36.937609    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:36.937609    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:36.937609    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:36.946603    9488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 22:39:36.947332    9488 pod_ready.go:92] pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:36.947332    9488 pod_ready.go:81] duration metric: took 412.5613ms for pod "kube-controller-manager-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:36.947332    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f56f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.140292    9488 request.go:629] Waited for 192.7351ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f56f
	I0731 22:39:37.140292    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f56f
	I0731 22:39:37.140292    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.140292    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.140542    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.145859    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:37.328398    9488 request.go:629] Waited for 181.4529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:37.328714    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:37.328833    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.328833    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.328896    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.340618    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:37.341571    9488 pod_ready.go:92] pod "kube-proxy-2f56f" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:37.341571    9488 pod_ready.go:81] duration metric: took 394.234ms for pod "kube-proxy-2f56f" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.341571    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.531630    9488 request.go:629] Waited for 189.8312ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:39:37.531884    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-htmnf
	I0731 22:39:37.531884    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.531884    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.531884    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.537303    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:37.733756    9488 request.go:629] Waited for 195.3274ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:37.733863    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:37.733863    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.734094    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.734094    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.738241    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:37.739151    9488 pod_ready.go:92] pod "kube-proxy-htmnf" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:37.739151    9488 pod_ready.go:81] duration metric: took 397.5752ms for pod "kube-proxy-htmnf" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.739151    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:37.936557    9488 request.go:629] Waited for 197.4037ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:39:37.936838    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z5gbs
	I0731 22:39:37.936838    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:37.936838    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:37.936838    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:37.941422    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.137561    9488 request.go:629] Waited for 194.978ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.137835    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.137835    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.137835    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.137835    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.145212    9488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:39:38.146483    9488 pod_ready.go:92] pod "kube-proxy-z5gbs" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.146586    9488 pod_ready.go:81] duration metric: took 407.4303ms for pod "kube-proxy-z5gbs" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.146586    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.341175    9488 request.go:629] Waited for 194.0369ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:39:38.341283    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300
	I0731 22:39:38.341283    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.341283    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.341283    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.346235    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.528846    9488 request.go:629] Waited for 180.9738ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.528846    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300
	I0731 22:39:38.529143    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.529143    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.529143    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.534917    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:38.535572    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.535572    9488 pod_ready.go:81] duration metric: took 388.9805ms for pod "kube-scheduler-ha-207300" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.535572    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.730747    9488 request.go:629] Waited for 194.0529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:39:38.730747    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m02
	I0731 22:39:38.730747    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.730747    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.730747    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.735375    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.935222    9488 request.go:629] Waited for 197.6597ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:38.935522    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m02
	I0731 22:39:38.935635    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:38.935635    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:38.935635    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:38.939916    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:38.940983    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:38.941108    9488 pod_ready.go:81] duration metric: took 405.5311ms for pod "kube-scheduler-ha-207300-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:38.941108    9488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:39.137252    9488 request.go:629] Waited for 196.0368ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m03
	I0731 22:39:39.137662    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-207300-m03
	I0731 22:39:39.137662    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.137662    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.137865    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.142197    9488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:39:39.339962    9488 request.go:629] Waited for 195.0352ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:39.340078    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes/ha-207300-m03
	I0731 22:39:39.340078    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.340167    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.340238    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.346612    9488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:39:39.347813    9488 pod_ready.go:92] pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:39:39.347813    9488 pod_ready.go:81] duration metric: took 406.6999ms for pod "kube-scheduler-ha-207300-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:39:39.347813    9488 pod_ready.go:38] duration metric: took 5.2144606s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
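
(Editor's note: each pod wait above pairs a GET of the pod with a GET of its node, and readiness is taken from the pod's Ready condition. The sketch below checks that condition for one system pod with client-go; the kubeconfig path is assumed, the pod name is taken from the log.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch: report whether a kube-system pod has condition Ready=True,
// the same signal pod_ready.go waits on for etcd, kube-apiserver, etc.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-207300-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, ready)
}
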
	I0731 22:39:39.347813    9488 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:39:39.359568    9488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:39:39.387357    9488 api_server.go:72] duration metric: took 20.7079866s to wait for apiserver process to appear ...
	I0731 22:39:39.387357    9488 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:39:39.387465    9488 api_server.go:253] Checking apiserver healthz at https://172.17.21.92:8443/healthz ...
	I0731 22:39:39.394573    9488 api_server.go:279] https://172.17.21.92:8443/healthz returned 200:
	ok
	I0731 22:39:39.394573    9488 round_trippers.go:463] GET https://172.17.21.92:8443/version
	I0731 22:39:39.394573    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.394573    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.398317    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.399123    9488 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 22:39:39.400201    9488 api_server.go:141] control plane version: v1.30.3
	I0731 22:39:39.400201    9488 api_server.go:131] duration metric: took 12.8437ms to wait for apiserver health ...
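
(Editor's note: the health check above is a plain GET of /healthz, which must return "ok", followed by a GET of /version to read the control-plane version. A minimal sketch of the same probe through client-go's discovery REST client is below; the kubeconfig path is an assumption.)

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch of the apiserver health/version probe performed after the join:
// GET /healthz, then ask the server for its version.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s, control plane version: %s\n", body, ver.GitVersion)
}
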
	I0731 22:39:39.400201    9488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:39:39.541809    9488 request.go:629] Waited for 141.3817ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.541985    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.541985    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.541985    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.541985    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.552569    9488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 22:39:39.562550    9488 system_pods.go:59] 24 kube-system pods found
	I0731 22:39:39.562597    9488 system_pods.go:61] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "etcd-ha-207300-m03" [93daa58c-b243-42a4-bb99-041cbc686b58] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kindnet-x9ppc" [14752388-ec95-431d-80c6-86e6c4fd1c14] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-apiserver-ha-207300-m03" [45d4ac2d-f672-4bce-8d5a-f5d7b246b58c] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-controller-manager-ha-207300-m03" [88e8a610-6178-4caf-9860-0a24b17386f5] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-2f56f" [045dbfdd-d6ef-4224-a868-0a71d78c2345] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-scheduler-ha-207300-m03" [857cc362-2f33-4e60-a7ed-bb207cd5b4b7] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "kube-vip-ha-207300-m03" [d257808d-8954-4ca1-b3d7-b81468bf17df] Running
	I0731 22:39:39.562597    9488 system_pods.go:61] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:39:39.562597    9488 system_pods.go:74] duration metric: took 162.3937ms to wait for pod list to return data ...
	I0731 22:39:39.562597    9488 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:39:39.729798    9488 request.go:629] Waited for 167.0073ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:39:39.729798    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:39:39.729798    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.729798    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.729798    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.735635    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:39.735971    9488 default_sa.go:45] found service account: "default"
	I0731 22:39:39.735971    9488 default_sa.go:55] duration metric: took 173.3722ms for default service account to be created ...
	I0731 22:39:39.735971    9488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:39:39.933172    9488 request.go:629] Waited for 196.9096ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.933309    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/namespaces/kube-system/pods
	I0731 22:39:39.933309    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:39.933309    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:39.933309    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:39.944637    9488 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 22:39:39.954327    9488 system_pods.go:86] 24 kube-system pods found
	I0731 22:39:39.954327    9488 system_pods.go:89] "coredns-7db6d8ff4d-76ftg" [bf92d1a7-935b-4c9a-b8bd-30ae3361df12] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "coredns-7db6d8ff4d-8xt8f" [df01f8c6-b706-4225-8470-1fbdf9828343] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300" [e8d252ff-ddb3-4c99-a761-31c9c9f1b878] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300-m02" [c3906bb1-a736-42d5-a6c5-2b2011e96095] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "etcd-ha-207300-m03" [93daa58c-b243-42a4-bb99-041cbc686b58] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-kz4x6" [7a9f0cc3-761c-43dc-8762-1adaff90efa2] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-lmdqz" [9c96c91b-0a25-4cfd-be3a-5a843e9bed74] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kindnet-x9ppc" [14752388-ec95-431d-80c6-86e6c4fd1c14] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300" [eb0e0730-5fd4-41b6-8126-ab6e97ef3838] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300-m02" [ed634f14-62de-4ec5-af02-8fbcb10ea3bf] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-apiserver-ha-207300-m03" [45d4ac2d-f672-4bce-8d5a-f5d7b246b58c] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300" [42d3dea7-1f64-4c4e-b700-eafb129dc8de] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m02" [c630fba1-2a98-4176-aa73-c4dfc5602505] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-controller-manager-ha-207300-m03" [88e8a610-6178-4caf-9860-0a24b17386f5] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-2f56f" [045dbfdd-d6ef-4224-a868-0a71d78c2345] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-htmnf" [e5ac19af-40fc-448c-8c47-45bcff41ad20] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-proxy-z5gbs" [156fcdf2-9a4c-4f9b-bf4f-dfa2a48e3cbc] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300" [29ce7842-7630-492e-adcc-1cb0837afe4d] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300-m02" [5ce1215b-baaf-42e2-be58-4b8850ca3e9d] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-scheduler-ha-207300-m03" [857cc362-2f33-4e60-a7ed-bb207cd5b4b7] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300" [f8d305a0-e7ef-4336-9a79-0052678c97cd] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300-m02" [47e8411b-e8ae-4561-95a9-b2957d56505b] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "kube-vip-ha-207300-m03" [d257808d-8954-4ca1-b3d7-b81468bf17df] Running
	I0731 22:39:39.954327    9488 system_pods.go:89] "storage-provisioner" [47da608c-5f75-43ea-8403-56b00ff33fd1] Running
	I0731 22:39:39.954857    9488 system_pods.go:126] duration metric: took 218.8064ms to wait for k8s-apps to be running ...
	I0731 22:39:39.954857    9488 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:39:39.964348    9488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:39:39.996098    9488 system_svc.go:56] duration metric: took 41.2407ms WaitForService to wait for kubelet
	I0731 22:39:39.996098    9488 kubeadm.go:582] duration metric: took 21.3167196s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:39:39.996098    9488 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:39:40.136596    9488 request.go:629] Waited for 140.116ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.21.92:8443/api/v1/nodes
	I0731 22:39:40.136804    9488 round_trippers.go:463] GET https://172.17.21.92:8443/api/v1/nodes
	I0731 22:39:40.136804    9488 round_trippers.go:469] Request Headers:
	I0731 22:39:40.136804    9488 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:39:40.136804    9488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 22:39:40.141855    9488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:39:40.144256    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144256    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144328    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:39:40.144328    9488 node_conditions.go:123] node cpu capacity is 2
	I0731 22:39:40.144328    9488 node_conditions.go:105] duration metric: took 148.2275ms to run NodePressure ...
	I0731 22:39:40.144328    9488 start.go:241] waiting for startup goroutines ...
	I0731 22:39:40.144394    9488 start.go:255] writing updated cluster config ...
	I0731 22:39:40.156435    9488 ssh_runner.go:195] Run: rm -f paused
	I0731 22:39:40.305852    9488 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:39:40.309434    9488 out.go:177] * Done! kubectl is now configured to use "ha-207300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/383ca7ed078722c5076713b3759129562417aca4629178d90d94bf59407c308a/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ec30e750851512648397310d12de83abfcf8dfec70209ed81809a468cb758c0/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:31:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/685a24f7e87194d87281943fc543bcd38c32457da023a59b9272abcf739ddc96/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.928183289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933141822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933263823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:56 ha-207300 dockerd[1435]: time="2024-07-31T22:31:56.933434824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366302985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366509985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.366532285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.368686991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.380440222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.381804125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.382035826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:31:57 ha-207300 dockerd[1435]: time="2024-07-31T22:31:57.382477027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.059737265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060041767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060179167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 dockerd[1435]: time="2024-07-31T22:40:19.060527369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:19 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:40:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/038d12e18eb5082f79f6a7f22b64a43502a4b0b9609b391d78911bb2dba52ec0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 22:40:20 ha-207300 cri-dockerd[1324]: time="2024-07-31T22:40:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.973859815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974011117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974032017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 22:40:20 ha-207300 dockerd[1435]: time="2024-07-31T22:40:20.974723426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39b3a643e1150       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   038d12e18eb50       busybox-fc5497c4f-dmsjq
	ef2b9187dc7ad       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   685a24f7e8719       coredns-7db6d8ff4d-76ftg
	aee85563f6da1       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   5ec30e7508515       coredns-7db6d8ff4d-8xt8f
	9a35498ccbc6f       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   383ca7ed07872       storage-provisioner
	1aa0807dc075f       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              27 minutes ago      Running             kindnet-cni               0                   f2cb14db0f72d       kindnet-lmdqz
	76a17591c6fac       55bb025d2cfa5                                                                                         27 minutes ago      Running             kube-proxy                0                   c618b03095696       kube-proxy-z5gbs
	2994dd0871403       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   b8f8ab975dd56       kube-vip-ha-207300
	23266576b86cf       76932a3b37d7e                                                                                         27 minutes ago      Running             kube-controller-manager   0                   f41b2b390e4a3       kube-controller-manager-ha-207300
	ca42a9c8944b7       1f6d574d502f3                                                                                         27 minutes ago      Running             kube-apiserver            0                   43cc4ea2f8d23       kube-apiserver-ha-207300
	72d884b0f8834       3edc18e7b7672                                                                                         27 minutes ago      Running             kube-scheduler            0                   0b7f062808ba1       kube-scheduler-ha-207300
	f98bfdd5c1907       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   5451ccaff78bc       etcd-ha-207300
	
	
	==> coredns [aee85563f6da] <==
	[INFO] 10.244.1.2:60050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166802s
	[INFO] 10.244.0.4:44272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158902s
	[INFO] 10.244.0.4:55167 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137902s
	[INFO] 10.244.0.4:55952 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146902s
	[INFO] 10.244.0.4:35327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000252103s
	[INFO] 10.244.0.4:43599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126001s
	[INFO] 10.244.2.2:60189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115702s
	[INFO] 10.244.2.2:49019 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134702s
	[INFO] 10.244.2.2:43833 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081301s
	[INFO] 10.244.2.2:37834 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130701s
	[INFO] 10.244.1.2:37113 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167603s
	[INFO] 10.244.1.2:48182 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123902s
	[INFO] 10.244.1.2:36265 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012707856s
	[INFO] 10.244.1.2:54993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129802s
	[INFO] 10.244.1.2:39553 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165102s
	[INFO] 10.244.1.2:37452 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065401s
	[INFO] 10.244.0.4:55954 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090201s
	[INFO] 10.244.1.2:49247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172202s
	[INFO] 10.244.1.2:58188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082401s
	[INFO] 10.244.1.2:45588 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082901s
	[INFO] 10.244.0.4:39075 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123501s
	[INFO] 10.244.0.4:40567 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000151102s
	[INFO] 10.244.2.2:56575 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120602s
	[INFO] 10.244.2.2:46069 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000059001s
	[INFO] 10.244.1.2:57058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239503s
	
	
	==> coredns [ef2b9187dc7a] <==
	[INFO] 10.244.1.2:36380 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.001019912s
	[INFO] 10.244.0.4:45601 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032407098s
	[INFO] 10.244.0.4:56553 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234803s
	[INFO] 10.244.0.4:41356 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005240664s
	[INFO] 10.244.2.2:52594 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073601s
	[INFO] 10.244.2.2:51267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129201s
	[INFO] 10.244.2.2:42341 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000051701s
	[INFO] 10.244.2.2:46960 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166702s
	[INFO] 10.244.1.2:45426 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000143802s
	[INFO] 10.244.1.2:47990 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152101s
	[INFO] 10.244.0.4:43210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179602s
	[INFO] 10.244.0.4:59126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000236803s
	[INFO] 10.244.0.4:46953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000332004s
	[INFO] 10.244.2.2:47159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189603s
	[INFO] 10.244.2.2:58078 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000628s
	[INFO] 10.244.2.2:50910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000385904s
	[INFO] 10.244.2.2:45683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079501s
	[INFO] 10.244.1.2:42810 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149602s
	[INFO] 10.244.0.4:54879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000376804s
	[INFO] 10.244.0.4:40853 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295304s
	[INFO] 10.244.2.2:48750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129201s
	[INFO] 10.244.2.2:45748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106901s
	[INFO] 10.244.1.2:34395 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177802s
	[INFO] 10.244.1.2:43660 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000535s
	[INFO] 10.244.1.2:42514 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090301s
	
	
	==> describe nodes <==
	Name:               ha-207300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_31_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:31:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:58:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:55:41 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:55:41 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:55:41 +0000   Wed, 31 Jul 2024 22:31:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:55:41 +0000   Wed, 31 Jul 2024 22:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.21.92
	  Hostname:    ha-207300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a148e76579a04c519b4c19b001798bd3
	  System UUID:                960376a8-fd40-614d-a948-5e6e5b08529e
	  Boot ID:                    6fcae202-face-4e62-bb79-1aea3b1cf7da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dmsjq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-76ftg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-8xt8f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-207300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-lmdqz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-207300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-207300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-z5gbs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-207300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-207300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-207300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-207300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-207300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-207300 status is now: NodeReady
	  Normal  RegisteredNode           23m   node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-207300 event: Registered Node ha-207300 in Controller
	
	
	Name:               ha-207300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:35:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:57:02 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 22:56:10 +0000   Wed, 31 Jul 2024 22:57:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 22:56:10 +0000   Wed, 31 Jul 2024 22:57:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 22:56:10 +0000   Wed, 31 Jul 2024 22:57:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 22:56:10 +0000   Wed, 31 Jul 2024 22:57:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.28.136
	  Hostname:    ha-207300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb62ce90f3be47b4a23f24eb61648c2a
	  System UUID:                ecc49eb4-b0e3-e647-bbee-85d7ecde0688
	  Boot ID:                    7e79ea17-4642-4c4c-acdb-eb562d08a26f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x7dnz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-207300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-kz4x6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-207300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-207300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-htmnf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-207300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-207300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-207300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-207300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-207300-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-207300-m02 event: Registered Node ha-207300-m02 in Controller
	  Normal  NodeNotReady             60s                node-controller  Node ha-207300-m02 status is now: NodeNotReady
	
	
	Name:               ha-207300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_39_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:39:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:55:59 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:55:59 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:55:59 +0000   Wed, 31 Jul 2024 22:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:55:59 +0000   Wed, 31 Jul 2024 22:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.27.253
	  Hostname:    ha-207300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3aded06a5a224beeabb0709882901395
	  System UUID:                79b9d085-c6d3-e54b-af08-0e582d6afc79
	  Boot ID:                    06948a4e-8542-411e-9321-238514bfff17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f8sql                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-207300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-x9ppc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-207300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-207300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2f56f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-207300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-207300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-207300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-207300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-207300-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-207300-m03 event: Registered Node ha-207300-m03 in Controller
	
	
	Name:               ha-207300-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-207300-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-207300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_44_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:44:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-207300-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:55:19 +0000   Wed, 31 Jul 2024 22:44:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:55:19 +0000   Wed, 31 Jul 2024 22:44:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:55:19 +0000   Wed, 31 Jul 2024 22:44:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:55:19 +0000   Wed, 31 Jul 2024 22:45:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.23.92
	  Hostname:    ha-207300-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f70117c7f014c9589f4c2d27e329ed3
	  System UUID:                c5e5293d-eaba-d541-b8a5-504cb4eaddb4
	  Boot ID:                    aef0b564-5875-42cc-b564-615afe743bbe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2tn75       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-5gcfr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-207300-m04 event: Registered Node ha-207300-m04 in Controller
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-207300-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-207300-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-207300-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-207300-m04 event: Registered Node ha-207300-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-207300-m04 event: Registered Node ha-207300-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-207300-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.795448] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:30] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.169389] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +29.815796] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.097880] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.514174] systemd-fstab-generator[1040]: Ignoring "noauto" option for root device
	[  +0.168230] systemd-fstab-generator[1052]: Ignoring "noauto" option for root device
	[  +0.215055] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +2.807586] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.191062] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.183350] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.269279] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[Jul31 22:31] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +0.097061] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.784760] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +7.388087] systemd-fstab-generator[1894]: Ignoring "noauto" option for root device
	[  +0.088756] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.219115] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.315294] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[ +15.143635] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.621970] kauditd_printk_skb: 29 callbacks suppressed
	[Jul31 22:35] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 22:38] hrtimer: interrupt took 3519619 ns
	
	
	==> etcd [f98bfdd5c190] <==
	{"level":"warn","ts":"2024-07-31T22:58:43.851064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.882291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.890246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.915416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.923162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.928225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.936341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.948881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.957292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.962704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.97688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.987723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:43.998197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.004043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.010172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.022037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.033875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.044694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.056542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.06607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.071582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.082484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.092833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.102981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:58:44.12228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3faabe9549613299","from":"3faabe9549613299","remote-peer-id":"a278d90fe05b03d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:58:44 up 29 min,  0 users,  load average: 1.14, 0.59, 0.41
	Linux ha-207300 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1aa0807dc075] <==
	I0731 22:58:13.978289       1 main.go:322] Node ha-207300-m04 has CIDR [10.244.3.0/24] 
	I0731 22:58:23.984342       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:58:23.984461       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:58:23.985136       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:58:23.985729       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:58:23.987730       1 main.go:295] Handling node with IPs: map[172.17.23.92:{}]
	I0731 22:58:23.988482       1 main.go:322] Node ha-207300-m04 has CIDR [10.244.3.0/24] 
	I0731 22:58:23.989228       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:58:23.989299       1 main.go:299] handling current node
	I0731 22:58:33.979011       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:58:33.979583       1 main.go:299] handling current node
	I0731 22:58:33.979712       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:58:33.979724       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	I0731 22:58:33.980024       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:58:33.980169       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:58:33.981850       1 main.go:295] Handling node with IPs: map[172.17.23.92:{}]
	I0731 22:58:33.981897       1 main.go:322] Node ha-207300-m04 has CIDR [10.244.3.0/24] 
	I0731 22:58:43.975886       1 main.go:295] Handling node with IPs: map[172.17.27.253:{}]
	I0731 22:58:43.975959       1 main.go:322] Node ha-207300-m03 has CIDR [10.244.2.0/24] 
	I0731 22:58:43.976108       1 main.go:295] Handling node with IPs: map[172.17.23.92:{}]
	I0731 22:58:43.976117       1 main.go:322] Node ha-207300-m04 has CIDR [10.244.3.0/24] 
	I0731 22:58:43.976178       1 main.go:295] Handling node with IPs: map[172.17.21.92:{}]
	I0731 22:58:43.976186       1 main.go:299] handling current node
	I0731 22:58:43.976200       1 main.go:295] Handling node with IPs: map[172.17.28.136:{}]
	I0731 22:58:43.976205       1 main.go:322] Node ha-207300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ca42a9c8944b] <==
	E0731 22:39:10.862498       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0731 22:39:10.862807       1 timeout.go:142] post-timeout activity - time-elapsed: 2.133912ms, PATCH "/api/v1/namespaces/default/events/ha-207300-m03.17e76d4aab696f8d" result: <nil>
	E0731 22:39:10.884110       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 23.350129ms, panicked: false, err: context canceled, panic-reason: <nil>
	E0731 22:40:24.863994       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54458: use of closed network connection
	E0731 22:40:26.474163       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54460: use of closed network connection
	E0731 22:40:27.097661       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54462: use of closed network connection
	E0731 22:40:27.666208       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54464: use of closed network connection
	E0731 22:40:28.195403       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54466: use of closed network connection
	E0731 22:40:28.754879       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54468: use of closed network connection
	E0731 22:40:29.313353       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54470: use of closed network connection
	E0731 22:40:29.858863       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54472: use of closed network connection
	E0731 22:40:30.411250       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54474: use of closed network connection
	E0731 22:40:31.401846       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54477: use of closed network connection
	E0731 22:40:41.946824       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54479: use of closed network connection
	E0731 22:40:42.481604       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54484: use of closed network connection
	E0731 22:40:53.012848       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54487: use of closed network connection
	E0731 22:40:53.510279       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54490: use of closed network connection
	E0731 22:41:04.056959       1 conn.go:339] Error on socket receive: read tcp 172.17.31.254:8443->172.17.16.1:54492: use of closed network connection
	I0731 22:58:06.583231       1 trace.go:236] Trace[493392110]: "Update" accept:application/json, */*,audit-id:aa92bf4f-9eee-4500-b8ce-11c0c6848671,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (31-Jul-2024 22:58:06.027) (total time: 555ms):
	Trace[493392110]: ["GuaranteedUpdate etcd3" audit-id:aa92bf4f-9eee-4500-b8ce-11c0c6848671,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 555ms (22:58:06.028)
	Trace[493392110]:  ---"Txn call completed" 554ms (22:58:06.582)]
	Trace[493392110]: [555.370707ms] [555.370707ms] END
	I0731 22:58:06.680741       1 trace.go:236] Trace[2113365438]: "Get" accept:application/json, */*,audit-id:d376fdac-bcbd-4b52-a430-7f16adbeb5b9,client:172.17.21.92,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (31-Jul-2024 22:58:06.035) (total time: 645ms):
	Trace[2113365438]: ---"About to write a response" 645ms (22:58:06.680)
	Trace[2113365438]: [645.434357ms] [645.434357ms] END
	
	
	==> kube-controller-manager [23266576b86c] <==
	I0731 22:39:14.885189       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-207300-m03"
	I0731 22:40:17.858337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="211.080858ms"
	I0731 22:40:17.899017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.631023ms"
	I0731 22:40:18.026266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.074597ms"
	I0731 22:40:18.309325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="282.922651ms"
	I0731 22:40:18.841373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="531.759416ms"
	E0731 22:40:18.841453       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 22:40:18.841534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0731 22:40:18.847364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.5µs"
	I0731 22:40:19.049343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.757032ms"
	I0731 22:40:19.049446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.1µs"
	I0731 22:40:19.966028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.001µs"
	I0731 22:40:21.084401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.418524ms"
	I0731 22:40:21.085052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.701µs"
	I0731 22:40:21.637402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.493145ms"
	I0731 22:40:21.637555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.5µs"
	I0731 22:40:21.853997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.443764ms"
	I0731 22:40:21.854157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.1µs"
	I0731 22:44:36.246695       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-207300-m04\" does not exist"
	I0731 22:44:36.387209       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-207300-m04" podCIDRs=["10.244.3.0/24"]
	I0731 22:44:40.195233       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-207300-m04"
	I0731 22:45:09.871552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-207300-m04"
	I0731 22:57:44.791726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-207300-m04"
	I0731 22:57:45.195103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.907092ms"
	I0731 22:57:45.195212       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.3µs"
	
	
	==> kube-proxy [76a17591c6fa] <==
	I0731 22:31:36.765718       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:31:36.788887       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.21.92"]
	I0731 22:31:36.871466       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:31:36.871591       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:31:36.871616       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:31:36.876289       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:31:36.877211       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:31:36.877243       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:31:36.878788       1 config.go:192] "Starting service config controller"
	I0731 22:31:36.878948       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:31:36.879378       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:31:36.879466       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:31:36.880475       1 config.go:319] "Starting node config controller"
	I0731 22:31:36.880510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:31:36.980656       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 22:31:36.980672       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:31:36.980711       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [72d884b0f883] <==
	W0731 22:31:19.437090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:31:19.437299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 22:31:19.490439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 22:31:19.490491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 22:31:19.511747       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:31:19.511989       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 22:31:19.572668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 22:31:19.573725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 22:31:19.704448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:31:19.704593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:31:19.721793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:31:19.721834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:31:19.802377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.802488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.807167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:31:19.807637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:31:19.863708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.864270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.899344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:31:19.899547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:31:19.909474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:31:19.909805       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:31:19.911512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:31:19.911557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0731 22:31:22.682143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 22:54:21 ha-207300 kubelet[2395]: E0731 22:54:21.706869    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:54:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:54:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:54:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:54:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:55:21 ha-207300 kubelet[2395]: E0731 22:55:21.708075    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:55:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:55:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:55:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:55:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:56:21 ha-207300 kubelet[2395]: E0731 22:56:21.705539    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:56:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:56:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:56:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:56:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:57:21 ha-207300 kubelet[2395]: E0731 22:57:21.714310    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:57:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:57:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:57:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:57:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:58:21 ha-207300 kubelet[2395]: E0731 22:58:21.710997    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:58:21 ha-207300 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:58:21 ha-207300 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:58:21 ha-207300 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:58:21 ha-207300 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:58:34.740449    9988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-207300 -n ha-207300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-207300 -n ha-207300: (13.2432058s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-207300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (46.51s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (227.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-345700 --driver=hyperv
E0731 23:02:53.159099   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:03:16.582773   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 23:05:13.393843   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p image-345700 --driver=hyperv: exit status 90 (3m35.6719392s)

                                                
                                                
-- stdout --
	* [image-345700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "image-345700" primary control-plane node in "image-345700" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:01:37.891346   12716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 31 23:03:39 image-345700 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 23:03:39 image-345700 dockerd[665]: time="2024-07-31T23:03:39.309396145Z" level=info msg="Starting up"
	Jul 31 23:03:39 image-345700 dockerd[665]: time="2024-07-31T23:03:39.310433411Z" level=info msg="containerd not running, starting managed containerd"
	Jul 31 23:03:39 image-345700 dockerd[665]: time="2024-07-31T23:03:39.311777697Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.343803050Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371476024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371513726Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371772543Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371798445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371877350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.371897751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372255874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372354980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372378082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372389383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372487189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.372858113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376092820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376196227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376405640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376581451Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376690958Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.376838468Z" level=info msg="metadata content store policy set" policy=shared
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404212622Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404298728Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404468039Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404490240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404504941Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.404608848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405217187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405412399Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405459902Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405511006Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405528407Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405542108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405553708Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405566709Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405580610Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405599711Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405613812Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405624813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405643214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405658115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405670116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405685817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405700718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405722619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405736320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405748221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405760622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405774623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405785623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405802024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405814425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405828426Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405848827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405918832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.405940033Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406063841Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406314357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406335559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406348759Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406359260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406372261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406382062Z" level=info msg="NRI interface is disabled by configuration."
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.406717783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.407081206Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.407226816Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 31 23:03:39 image-345700 dockerd[671]: time="2024-07-31T23:03:39.407281319Z" level=info msg="containerd successfully booted in 0.065356s"
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.389475253Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.419402052Z" level=info msg="Loading containers: start."
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.584229857Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.792162209Z" level=info msg="Loading containers: done."
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.821278847Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.821478061Z" level=info msg="Daemon has completed initialization"
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.934587382Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 31 23:03:40 image-345700 dockerd[665]: time="2024-07-31T23:03:40.934847700Z" level=info msg="API listen on [::]:2376"
	Jul 31 23:03:40 image-345700 systemd[1]: Started Docker Application Container Engine.
	Jul 31 23:04:12 image-345700 systemd[1]: Stopping Docker Application Container Engine...
	Jul 31 23:04:12 image-345700 dockerd[665]: time="2024-07-31T23:04:12.097516025Z" level=info msg="Processing signal 'terminated'"
	Jul 31 23:04:12 image-345700 dockerd[665]: time="2024-07-31T23:04:12.100637156Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 31 23:04:12 image-345700 dockerd[665]: time="2024-07-31T23:04:12.100778658Z" level=info msg="Daemon shutdown complete"
	Jul 31 23:04:12 image-345700 dockerd[665]: time="2024-07-31T23:04:12.100852758Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 31 23:04:12 image-345700 dockerd[665]: time="2024-07-31T23:04:12.100876359Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 31 23:04:13 image-345700 systemd[1]: docker.service: Deactivated successfully.
	Jul 31 23:04:13 image-345700 systemd[1]: Stopped Docker Application Container Engine.
	Jul 31 23:04:13 image-345700 systemd[1]: Starting Docker Application Container Engine...
	Jul 31 23:04:13 image-345700 dockerd[1077]: time="2024-07-31T23:04:13.156840846Z" level=info msg="Starting up"
	Jul 31 23:05:13 image-345700 dockerd[1077]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 31 23:05:13 image-345700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 31 23:05:13 image-345700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 31 23:05:13 image-345700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p image-345700 --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p image-345700 -n image-345700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p image-345700 -n image-345700: exit status 6 (12.109445s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:05:13.573366    7932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0731 23:05:25.501082    7932 status.go:417] kubeconfig endpoint: get endpoint: "image-345700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-345700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (227.79s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (56.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- sh -c "ping -c 1 172.17.16.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- sh -c "ping -c 1 172.17.16.1": exit status 1 (10.4729527s)

                                                
                                                
-- stdout --
	PING 172.17.16.1 (172.17.16.1): 56 data bytes
	
	--- 172.17.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:36:48.938960    9952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.16.1) from pod (busybox-fc5497c4f-4hgmz): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- sh -c "ping -c 1 172.17.16.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- sh -c "ping -c 1 172.17.16.1": exit status 1 (10.4598387s)

                                                
                                                
-- stdout --
	PING 172.17.16.1 (172.17.16.1): 56 data bytes
	
	--- 172.17.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:36:59.914928   10216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.16.1) from pod (busybox-fc5497c4f-lxslb): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-411400 -n multinode-411400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-411400 -n multinode-411400: (11.8651514s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 logs -n 25: (8.4590356s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-526800 ssh -- ls                    | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:25 UTC | 31 Jul 24 23:25 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-526800                           | mount-start-1-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:25 UTC | 31 Jul 24 23:26 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-526800 ssh -- ls                    | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:26 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-526800                           | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:26 UTC |
	| start   | -p mount-start-2-526800                           | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:28 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --profile mount-start-2-526800 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-526800 ssh -- ls                    | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:28 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-526800                           | mount-start-2-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:29 UTC |
	| delete  | -p mount-start-1-526800                           | mount-start-1-526800 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:29 UTC |
	| start   | -p multinode-411400                               | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:36 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- apply -f                   | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- rollout                    | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- get pods -o                | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- get pods -o                | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-4hgmz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-lxslb --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-4hgmz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-lxslb --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-4hgmz -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-lxslb -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- get pods -o                | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-4hgmz                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC |                     |
	|         | busybox-fc5497c4f-4hgmz -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.16.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:36 UTC | 31 Jul 24 23:36 UTC |
	|         | busybox-fc5497c4f-lxslb                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-411400 -- exec                       | multinode-411400     | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:37 UTC |                     |
	|         | busybox-fc5497c4f-lxslb -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.16.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:29:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:29:24.746986   12704 out.go:291] Setting OutFile to fd 648 ...
	I0731 23:29:24.747646   12704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:29:24.747646   12704 out.go:304] Setting ErrFile to fd 1336...
	I0731 23:29:24.747646   12704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:29:24.771113   12704 out.go:298] Setting JSON to false
	I0731 23:29:24.775124   12704 start.go:129] hostinfo: {"hostname":"minikube6","uptime":544506,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 23:29:24.775124   12704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 23:29:24.780382   12704 out.go:177] * [multinode-411400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 23:29:24.785942   12704 notify.go:220] Checking for updates...
	I0731 23:29:24.788965   12704 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:29:24.791685   12704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:29:24.794241   12704 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 23:29:24.796748   12704 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:29:24.799648   12704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:29:24.802958   12704 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:29:24.802958   12704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:29:29.838664   12704 out.go:177] * Using the hyperv driver based on user configuration
	I0731 23:29:29.843281   12704 start.go:297] selected driver: hyperv
	I0731 23:29:29.843281   12704 start.go:901] validating driver "hyperv" against <nil>
	I0731 23:29:29.843281   12704 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:29:29.893912   12704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 23:29:29.895870   12704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:29:29.895959   12704 cni.go:84] Creating CNI manager for ""
	I0731 23:29:29.896075   12704 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 23:29:29.896131   12704 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 23:29:29.896450   12704 start.go:340] cluster config:
	{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:29:29.896955   12704 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:29:29.903718   12704 out.go:177] * Starting "multinode-411400" primary control-plane node in "multinode-411400" cluster
	I0731 23:29:29.907417   12704 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:29:29.907417   12704 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 23:29:29.907417   12704 cache.go:56] Caching tarball of preloaded images
	I0731 23:29:29.907417   12704 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:29:29.907417   12704 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:29:29.908618   12704 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:29:29.908868   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json: {Name:mkac11ff1443b0f9736d53bb6f0953ef8676c47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:29.909142   12704 start.go:360] acquireMachinesLock for multinode-411400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:29:29.910179   12704 start.go:364] duration metric: took 1.037ms to acquireMachinesLock for "multinode-411400"
	I0731 23:29:29.910448   12704 start.go:93] Provisioning new machine with config: &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 23:29:29.910448   12704 start.go:125] createHost starting for "" (driver="hyperv")
	I0731 23:29:29.917892   12704 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 23:29:29.918971   12704 start.go:159] libmachine.API.Create for "multinode-411400" (driver="hyperv")
	I0731 23:29:29.918971   12704 client.go:168] LocalClient.Create starting
	I0731 23:29:29.919331   12704 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 23:29:29.919331   12704 main.go:141] libmachine: Decoding PEM data...
	I0731 23:29:29.920016   12704 main.go:141] libmachine: Parsing certificate...
	I0731 23:29:29.920201   12704 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 23:29:29.920233   12704 main.go:141] libmachine: Decoding PEM data...
	I0731 23:29:29.920233   12704 main.go:141] libmachine: Parsing certificate...
	I0731 23:29:29.920233   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 23:29:31.833594   12704 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 23:29:31.833594   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:31.834231   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 23:29:33.464679   12704 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 23:29:33.464679   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:33.465732   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 23:29:34.883449   12704 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 23:29:34.883449   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:34.884280   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 23:29:38.266950   12704 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 23:29:38.267573   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:38.269718   12704 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 23:29:38.751404   12704 main.go:141] libmachine: Creating SSH key...
	I0731 23:29:38.928081   12704 main.go:141] libmachine: Creating VM...
	I0731 23:29:38.928081   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 23:29:41.614624   12704 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 23:29:41.614624   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:41.615665   12704 main.go:141] libmachine: Using switch "Default Switch"
	I0731 23:29:41.615665   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 23:29:43.268780   12704 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 23:29:43.269796   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:43.269907   12704 main.go:141] libmachine: Creating VHD
	I0731 23:29:43.270010   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 23:29:46.927208   12704 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B8745EF1-B28F-4B69-B13D-00DEF9AB6FA1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 23:29:46.927208   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:46.927432   12704 main.go:141] libmachine: Writing magic tar header
	I0731 23:29:46.927669   12704 main.go:141] libmachine: Writing SSH key tar header
	I0731 23:29:46.937969   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 23:29:50.010801   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:29:50.011306   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:50.011306   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\disk.vhd' -SizeBytes 20000MB
	I0731 23:29:52.478128   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:29:52.478128   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:52.478259   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-411400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 23:29:55.874742   12704 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-411400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 23:29:55.874742   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:55.875298   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-411400 -DynamicMemoryEnabled $false
	I0731 23:29:58.012710   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:29:58.012710   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:29:58.013298   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-411400 -Count 2
	I0731 23:30:00.061868   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:00.062203   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:00.062203   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-411400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\boot2docker.iso'
	I0731 23:30:02.573442   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:02.574495   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:02.574513   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-411400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\disk.vhd'
	I0731 23:30:05.146242   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:05.146418   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:05.146418   12704 main.go:141] libmachine: Starting VM...
	I0731 23:30:05.146418   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400
	I0731 23:30:08.308473   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:08.309111   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:08.309111   12704 main.go:141] libmachine: Waiting for host to start...
	I0731 23:30:08.309176   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:10.584119   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:10.585134   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:10.585134   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:13.130254   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:13.130764   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:14.141667   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:16.437621   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:16.437621   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:16.438750   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:19.023125   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:19.023125   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:20.027688   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:22.298443   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:22.298443   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:22.298687   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:24.852932   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:24.852932   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:25.862471   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:28.041687   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:28.041687   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:28.041687   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:30.550625   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:30:30.550625   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:31.559087   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:33.740617   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:33.740617   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:33.740686   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:36.231971   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:30:36.232160   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:36.232160   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:38.326336   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:38.327120   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:38.327120   12704 machine.go:94] provisionDockerMachine start ...
	I0731 23:30:38.327120   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:40.436238   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:40.437308   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:40.437415   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:42.885803   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:30:42.885919   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:42.890633   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:30:42.904130   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:30:42.904130   12704 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:30:43.037474   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:30:43.037594   12704 buildroot.go:166] provisioning hostname "multinode-411400"
	I0731 23:30:43.037802   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:45.066822   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:45.067021   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:45.067148   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:47.430881   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:30:47.430881   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:47.437043   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:30:47.437796   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:30:47.437796   12704 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400 && echo "multinode-411400" | sudo tee /etc/hostname
	I0731 23:30:47.595537   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400
	
	I0731 23:30:47.595537   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:49.635764   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:49.635764   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:49.636110   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:52.106159   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:30:52.106159   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:52.111457   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:30:52.112191   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:30:52.112191   12704 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:30:52.252699   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:30:52.252699   12704 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:30:52.252699   12704 buildroot.go:174] setting up certificates
	I0731 23:30:52.252699   12704 provision.go:84] configureAuth start
	I0731 23:30:52.252699   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:54.284314   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:54.285384   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:54.285384   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:30:56.675681   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:30:56.675681   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:56.675681   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:30:58.705466   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:30:58.705466   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:30:58.705466   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:01.071753   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:01.071753   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:01.071753   12704 provision.go:143] copyHostCerts
	I0731 23:31:01.072427   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:31:01.073062   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:31:01.073062   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:31:01.073774   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:31:01.074602   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:31:01.074602   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:31:01.074602   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:31:01.075514   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:31:01.076637   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:31:01.076674   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:31:01.076674   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:31:01.077445   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:31:01.078804   12704 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400 san=[127.0.0.1 172.17.20.56 localhost minikube multinode-411400]
	I0731 23:31:01.464344   12704 provision.go:177] copyRemoteCerts
	I0731 23:31:01.475355   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:31:01.476339   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:03.550217   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:03.550217   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:03.550479   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:06.021991   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:06.022060   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:06.022060   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:31:06.144933   12704 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6695195s)
	I0731 23:31:06.144933   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:31:06.145801   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:31:06.191005   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:31:06.191005   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 23:31:06.235537   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:31:06.235537   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:31:06.280816   12704 provision.go:87] duration metric: took 14.0279406s to configureAuth
	I0731 23:31:06.280816   12704 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:31:06.281945   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:31:06.281945   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:08.366870   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:08.367720   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:08.367720   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:10.813228   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:10.813228   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:10.818620   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:10.819192   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:31:10.819271   12704 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:31:10.956744   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:31:10.956744   12704 buildroot.go:70] root file system type: tmpfs
	I0731 23:31:10.956744   12704 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:31:10.957276   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:13.078088   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:13.078088   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:13.078088   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:15.558870   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:15.558870   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:15.564279   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:15.565027   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:31:15.565027   12704 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:31:15.716916   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:31:15.716916   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:17.771029   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:17.771029   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:17.771196   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:20.264877   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:20.265744   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:20.270939   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:20.271071   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:31:20.271071   12704 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:31:22.509723   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:31:22.509761   12704 machine.go:97] duration metric: took 44.1820818s to provisionDockerMachine
	I0731 23:31:22.509826   12704 client.go:171] duration metric: took 1m52.5893621s to LocalClient.Create
	I0731 23:31:22.509826   12704 start.go:167] duration metric: took 1m52.5894266s to libmachine.API.Create "multinode-411400"
	I0731 23:31:22.509826   12704 start.go:293] postStartSetup for "multinode-411400" (driver="hyperv")
	I0731 23:31:22.509826   12704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:31:22.523410   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:31:22.523410   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:24.609232   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:24.609330   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:24.609330   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:27.101302   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:27.102139   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:27.102751   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:31:27.205094   12704 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.681625s)
	I0731 23:31:27.217035   12704 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:31:27.223298   12704 command_runner.go:130] > NAME=Buildroot
	I0731 23:31:27.223298   12704 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:31:27.223298   12704 command_runner.go:130] > ID=buildroot
	I0731 23:31:27.223298   12704 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:31:27.223298   12704 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:31:27.223298   12704 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:31:27.223298   12704 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:31:27.224043   12704 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:31:27.224783   12704 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:31:27.224783   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:31:27.237685   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:31:27.255105   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:31:27.300233   12704 start.go:296] duration metric: took 4.7903469s for postStartSetup
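
The filesync scans above map everything under the local .minikube\files tree into the VM at the same relative path, which is how files\etc\ssl\certs\123322.pem ends up at /etc/ssl/certs/123322.pem. A minimal Go sketch of that path mapping, assuming a plain walk of the local tree (illustrative only, not minikube's filesync implementation):

// Illustrative sketch: map local assets under ".minikube\files" to their
// in-guest destinations, as the filesync scan above does.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// guestTargets walks root and returns the guest path each local file maps to.
func guestTargets(root string) ([]string, error) {
	var targets []string
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		// files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
		targets = append(targets, "/"+strings.ReplaceAll(rel, `\`, "/"))
		return nil
	})
	return targets, err
}

func main() {
	t, err := guestTargets(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`)
	fmt.Println(t, err)
}
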
	I0731 23:31:27.303469   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:29.414160   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:29.414160   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:29.414160   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:31.897162   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:31.897162   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:31.897511   12704 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:31:31.900106   12704 start.go:128] duration metric: took 2m1.9881111s to createHost
	I0731 23:31:31.900106   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:34.033984   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:34.033984   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:34.034516   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:36.534125   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:36.534125   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:36.540073   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:36.540884   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:31:36.540884   12704 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:31:36.672809   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468696.693077216
	
	I0731 23:31:36.672809   12704 fix.go:216] guest clock: 1722468696.693077216
	I0731 23:31:36.672809   12704 fix.go:229] Guest: 2024-07-31 23:31:36.693077216 +0000 UTC Remote: 2024-07-31 23:31:31.9001068 +0000 UTC m=+127.309046901 (delta=4.792970416s)
	I0731 23:31:36.673056   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:38.681918   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:38.682009   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:38.682089   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:41.109949   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:41.110709   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:41.117203   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:41.117369   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.20.56 22 <nil> <nil>}
	I0731 23:31:41.117369   12704 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722468696
	I0731 23:31:41.274312   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:31:36 UTC 2024
	
	I0731 23:31:41.274477   12704 fix.go:236] clock set: Wed Jul 31 23:31:36 UTC 2024
	 (err=<nil>)
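
The clock fix above reads the guest time with date +%s.%N, compares it to the host-side timestamp, and, because the ~4.79 s drift is outside tolerance, resets the guest clock with sudo date -s @<seconds>. A rough Go sketch of that comparison; the 2 s tolerance here is an assumption for illustration, not minikube's actual threshold:

// Sketch of the guest-clock check above (not minikube's fix.go): compare the
// Unix timestamp read from the guest against the host clock and decide
// whether to reset it.
package main

import (
	"fmt"
	"time"
)

func needsClockFix(guestUnix float64, host time.Time, tolerance time.Duration) (bool, time.Duration) {
	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta > tolerance, delta
}

func main() {
	host := time.Date(2024, 7, 31, 23, 31, 31, 900106800, time.UTC)
	fix, delta := needsClockFix(1722468696.693077216, host, 2*time.Second)
	fmt.Printf("delta=%v fix=%v\n", delta, fix) // ~4.79s, true -> run "sudo date -s @1722468696"
}
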
	I0731 23:31:41.274477   12704 start.go:83] releasing machines lock for "multinode-411400", held for 2m11.3625568s
	I0731 23:31:41.274746   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:43.295330   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:43.296367   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:43.296441   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:45.728319   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:45.728495   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:45.732881   12704 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:31:45.732881   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:45.744836   12704 ssh_runner.go:195] Run: cat /version.json
	I0731 23:31:45.744836   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:31:47.882108   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:47.882108   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:47.882401   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:47.886060   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:31:47.886060   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:47.886295   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:31:50.509456   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:50.509456   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:50.510268   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:31:50.532452   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:31:50.533155   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:31:50.533875   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:31:50.605352   12704 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:31:50.606024   12704 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8724094s)
	W0731 23:31:50.606024   12704 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 23:31:50.622674   12704 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 23:31:50.622674   12704 ssh_runner.go:235] Completed: cat /version.json: (4.8777767s)
	I0731 23:31:50.634576   12704 ssh_runner.go:195] Run: systemctl --version
	I0731 23:31:50.643558   12704 command_runner.go:130] > systemd 252 (252)
	I0731 23:31:50.643558   12704 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 23:31:50.654746   12704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:31:50.662192   12704 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 23:31:50.663467   12704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:31:50.675334   12704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:31:50.703448   12704 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:31:50.703448   12704 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:31:50.703887   12704 start.go:495] detecting cgroup driver to use...
	I0731 23:31:50.704192   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 23:31:50.718581   12704 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:31:50.718581   12704 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:31:50.744221   12704 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:31:50.757278   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 23:31:50.795222   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:31:50.813472   12704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:31:50.825342   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:31:50.856643   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:31:50.887626   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:31:50.915502   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:31:50.948679   12704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:31:50.978160   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:31:51.006540   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:31:51.036060   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:31:51.076013   12704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:31:51.096721   12704 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:31:51.107623   12704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:31:51.144197   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:51.336389   12704 ssh_runner.go:195] Run: sudo systemctl restart containerd
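
The run of sed commands above edits /etc/containerd/config.toml in place: pin the sandbox image to pause:3.9, switch runc to the v2 runtime, force SystemdCgroup = false so containerd matches the cgroupfs driver, point conf_dir at /etc/cni/net.d, and allow unprivileged ports, then restart containerd. The same kind of line-anchored rewrite expressed locally in Go instead of remote sed; the TOML fragment is an assumed example, not the VM's actual file:

// Illustration of the in-place config.toml edits above, done with Go regexps.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(conf, "${1}SystemdCgroup = false")
	fmt.Print(conf)
}
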
	I0731 23:31:51.365652   12704 start.go:495] detecting cgroup driver to use...
	I0731 23:31:51.379082   12704 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:31:51.401991   12704 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:31:51.403115   12704 command_runner.go:130] > [Unit]
	I0731 23:31:51.404047   12704 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:31:51.404097   12704 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:31:51.404097   12704 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:31:51.404097   12704 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:31:51.404097   12704 command_runner.go:130] > StartLimitBurst=3
	I0731 23:31:51.404097   12704 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:31:51.404097   12704 command_runner.go:130] > [Service]
	I0731 23:31:51.404097   12704 command_runner.go:130] > Type=notify
	I0731 23:31:51.404097   12704 command_runner.go:130] > Restart=on-failure
	I0731 23:31:51.404097   12704 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:31:51.404097   12704 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:31:51.404345   12704 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:31:51.404345   12704 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:31:51.404345   12704 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:31:51.404345   12704 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:31:51.404345   12704 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:31:51.404406   12704 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:31:51.404406   12704 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:31:51.404406   12704 command_runner.go:130] > ExecStart=
	I0731 23:31:51.404406   12704 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:31:51.404471   12704 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:31:51.404471   12704 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:31:51.404471   12704 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:31:51.404471   12704 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:31:51.404471   12704 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:31:51.404471   12704 command_runner.go:130] > LimitCORE=infinity
	I0731 23:31:51.404471   12704 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:31:51.404471   12704 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:31:51.404471   12704 command_runner.go:130] > TasksMax=infinity
	I0731 23:31:51.404471   12704 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:31:51.404621   12704 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:31:51.404621   12704 command_runner.go:130] > Delegate=yes
	I0731 23:31:51.404645   12704 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:31:51.404645   12704 command_runner.go:130] > KillMode=process
	I0731 23:31:51.404645   12704 command_runner.go:130] > [Install]
	I0731 23:31:51.404645   12704 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:31:51.416684   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:31:51.446584   12704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:31:51.489153   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:31:51.522288   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:31:51.557140   12704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:31:51.619853   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:31:51.640264   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:31:51.672672   12704 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0731 23:31:51.686523   12704 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:31:51.694905   12704 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:31:51.708406   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:31:51.726155   12704 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:31:51.767729   12704 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:31:51.948294   12704 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:31:52.145281   12704 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:31:52.145543   12704 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:31:52.188354   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:52.371209   12704 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:31:54.933143   12704 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5619015s)
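
The 130-byte /etc/docker/daemon.json pushed above is what switches dockerd to the cgroupfs driver noted in the log. Its exact content is not shown in the report, so the fields below are assumptions consistent with that note, rendered through a small Go struct:

// Hypothetical reconstruction of the small daemon.json written above; the
// field values are assumptions, only the cgroupfs driver is confirmed by the log.
package main

import (
	"encoding/json"
	"fmt"
)

type daemonConfig struct {
	ExecOpts      []string          `json:"exec-opts"`
	LogDriver     string            `json:"log-driver"`
	LogOpts       map[string]string `json:"log-opts"`
	StorageDriver string            `json:"storage-driver"`
}

func main() {
	b, _ := json.MarshalIndent(daemonConfig{
		ExecOpts:      []string{"native.cgroupdriver=cgroupfs"},
		LogDriver:     "json-file",
		LogOpts:       map[string]string{"max-size": "100m"},
		StorageDriver: "overlay2",
	}, "", "  ")
	fmt.Println(string(b))
}
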
	I0731 23:31:54.945461   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:31:54.980408   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:31:55.013996   12704 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:31:55.196419   12704 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:31:55.375869   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:55.576262   12704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:31:55.622100   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:31:55.659769   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:55.844687   12704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:31:55.941197   12704 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:31:55.953697   12704 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:31:55.961058   12704 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:31:55.961753   12704 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:31:55.961753   12704 command_runner.go:130] > Device: 0,22	Inode: 880         Links: 1
	I0731 23:31:55.961753   12704 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:31:55.961753   12704 command_runner.go:130] > Access: 2024-07-31 23:31:55.890724238 +0000
	I0731 23:31:55.961753   12704 command_runner.go:130] > Modify: 2024-07-31 23:31:55.890724238 +0000
	I0731 23:31:55.961753   12704 command_runner.go:130] > Change: 2024-07-31 23:31:55.893724244 +0000
	I0731 23:31:55.961753   12704 command_runner.go:130] >  Birth: -
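
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above comes down to polling for the socket file until it exists or the deadline passes; here the first stat already succeeds. A hypothetical Go helper sketching that wait loop (not minikube's own code):

// Poll for a socket path until it appears or the timeout elapses.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
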
	I0731 23:31:55.961753   12704 start.go:563] Will wait 60s for crictl version
	I0731 23:31:55.973184   12704 ssh_runner.go:195] Run: which crictl
	I0731 23:31:55.979799   12704 command_runner.go:130] > /usr/bin/crictl
	I0731 23:31:55.991564   12704 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:31:56.044887   12704 command_runner.go:130] > Version:  0.1.0
	I0731 23:31:56.045604   12704 command_runner.go:130] > RuntimeName:  docker
	I0731 23:31:56.045604   12704 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:31:56.045604   12704 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:31:56.045689   12704 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:31:56.054470   12704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:31:56.086988   12704 command_runner.go:130] > 27.1.1
	I0731 23:31:56.097527   12704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:31:56.126681   12704 command_runner.go:130] > 27.1.1
	I0731 23:31:56.133261   12704 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:31:56.133462   12704 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:31:56.137540   12704 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:31:56.137540   12704 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:31:56.137540   12704 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:31:56.137540   12704 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:31:56.141426   12704 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:31:56.141509   12704 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:31:56.154392   12704 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:31:56.160661   12704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
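
The bash one-liner above refreshes the host.minikube.internal entry: it filters out any previous line for that name, appends the current gateway IP (172.17.16.1), writes the result to a temp file, and sudo-copies it back so /etc/hosts is never truncated mid-edit. A small Go sketch of the same string transformation (the remote run does it with grep and echo as shown):

// Rewrite a hosts file: drop any stale entry for name, append the new IP.
package main

import (
	"fmt"
	"strings"
)

func updateHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(updateHosts("127.0.0.1\tlocalhost\n", "172.17.16.1", "host.minikube.internal"))
}
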
	I0731 23:31:56.182206   12704 kubeadm.go:883] updating cluster {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:31:56.182416   12704 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:31:56.191343   12704 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:31:56.214278   12704 docker.go:685] Got preloaded images: 
	I0731 23:31:56.214278   12704 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0731 23:31:56.224748   12704 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 23:31:56.242721   12704 command_runner.go:139] > {"Repositories":{}}
	I0731 23:31:56.255178   12704 ssh_runner.go:195] Run: which lz4
	I0731 23:31:56.261725   12704 command_runner.go:130] > /usr/bin/lz4
	I0731 23:31:56.261872   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 23:31:56.272678   12704 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 23:31:56.278172   12704 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:31:56.279012   12704 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:31:56.279224   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0731 23:31:58.491191   12704 docker.go:649] duration metric: took 2.2289276s to copy over tarball
	I0731 23:31:58.504554   12704 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 23:32:07.140888   12704 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6362235s)
	I0731 23:32:07.140888   12704 ssh_runner.go:146] rm: /preloaded.tar.lz4
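
The preload step above checks for /preloaded.tar.lz4 on the guest, copies the ~360 MB cached tarball over when it is missing, unpacks it into /var with lz4 decompression, and then removes it. A sketch of the unpack step as it would look run locally with os/exec; it assumes tar and lz4 are present, as the log confirms they are in the guest:

// Verify the tarball is in place, then extract it into /var with lz4.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload missing, copy it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
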
	I0731 23:32:07.202427   12704 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 23:32:07.219322   12704 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0731 23:32:07.219557   12704 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0731 23:32:07.271753   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:32:07.473336   12704 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:32:10.824262   12704 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3508832s)
	I0731 23:32:10.832347   12704 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 23:32:10.858038   12704 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 23:32:10.858038   12704 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:32:10.858038   12704 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 23:32:10.858038   12704 cache_images.go:84] Images are preloaded, skipping loading
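
The two reads of repositories.json above are how the preload is verified: before the tarball is applied the file is an empty {"Repositories":{}}, and afterwards it lists every image:tag to digest mapping quoted in the log. A short Go sketch of that check, using the same JSON shape:

// Parse Docker's repositories.json and check whether a given image:tag is present.
package main

import (
	"encoding/json"
	"fmt"
)

func preloaded(repositoriesJSON []byte, image string) (bool, error) {
	var doc struct {
		Repositories map[string]map[string]string `json:"Repositories"`
	}
	if err := json.Unmarshal(repositoriesJSON, &doc); err != nil {
		return false, err
	}
	for _, tags := range doc.Repositories {
		if _, ok := tags[image]; ok {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, _ := preloaded([]byte(`{"Repositories":{}}`), "registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println(ok) // false -> the preload tarball still needs to be applied
}
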
	I0731 23:32:10.858038   12704 kubeadm.go:934] updating node { 172.17.20.56 8443 v1.30.3 docker true true} ...
	I0731 23:32:10.858038   12704 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.20.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:32:10.865037   12704 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 23:32:10.929048   12704 command_runner.go:130] > cgroupfs
	I0731 23:32:10.929300   12704 cni.go:84] Creating CNI manager for ""
	I0731 23:32:10.929300   12704 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 23:32:10.929300   12704 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:32:10.929389   12704 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.20.56 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-411400 NodeName:multinode-411400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.20.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.20.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:32:10.929493   12704 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.20.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-411400"
	  kubeletExtraArgs:
	    node-ip: 172.17.20.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.20.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
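
The kubeadm config above is generated from the node parameters listed in the kubeadm options struct (advertise address 172.17.20.56, port 8443, node name multinode-411400, the cri-dockerd socket, and so on). A hypothetical text/template sketch of how the InitConfiguration portion could be rendered from those fields; this is an illustration, not minikube's actual template:

// Render a kubeadm InitConfiguration fragment from node-specific values.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]any{
		"NodeIP": "172.17.20.56", "APIServerPort": 8443, "NodeName": "multinode-411400",
	})
}
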
	
	I0731 23:32:10.941949   12704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:32:10.960270   12704 command_runner.go:130] > kubeadm
	I0731 23:32:10.960347   12704 command_runner.go:130] > kubectl
	I0731 23:32:10.960347   12704 command_runner.go:130] > kubelet
	I0731 23:32:10.960347   12704 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:32:10.972774   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:32:10.989094   12704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0731 23:32:11.018314   12704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:32:11.052141   12704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0731 23:32:11.095473   12704 ssh_runner.go:195] Run: grep 172.17.20.56	control-plane.minikube.internal$ /etc/hosts
	I0731 23:32:11.102661   12704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.20.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:32:11.137516   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:32:11.330770   12704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:32:11.361132   12704 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.20.56
	I0731 23:32:11.361293   12704 certs.go:194] generating shared ca certs ...
	I0731 23:32:11.361293   12704 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:11.362016   12704 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:32:11.362319   12704 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:32:11.362319   12704 certs.go:256] generating profile certs ...
	I0731 23:32:11.363343   12704 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.key
	I0731 23:32:11.363529   12704 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.crt with IP's: []
	I0731 23:32:11.467879   12704 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.crt ...
	I0731 23:32:11.467879   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.crt: {Name:mk58e12064f9d36605364a8cd9af30ba04438fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:11.469846   12704 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.key ...
	I0731 23:32:11.469846   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.key: {Name:mk1f4023cc2d2bba23a67cd4cdeaeb343d8f0f2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:11.470929   12704 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.5a01dc5f
	I0731 23:32:11.470929   12704 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.5a01dc5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.20.56]
	I0731 23:32:11.828529   12704 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.5a01dc5f ...
	I0731 23:32:11.828529   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.5a01dc5f: {Name:mk82e4d23ca75d6d340b36ef13471804cf69cf02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:11.830210   12704 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.5a01dc5f ...
	I0731 23:32:11.830210   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.5a01dc5f: {Name:mk00852678dd73f2ef810c0eef8dad97789d5dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:11.831449   12704 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.5a01dc5f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt
	I0731 23:32:11.844055   12704 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.5a01dc5f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key
	I0731 23:32:11.846066   12704 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key
	I0731 23:32:11.846066   12704 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt with IP's: []
	I0731 23:32:12.044659   12704 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt ...
	I0731 23:32:12.044659   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt: {Name:mkf17bc45409e188ebe8f4a2fe14c10413af178c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:12.046426   12704 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key ...
	I0731 23:32:12.046426   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key: {Name:mkba87ca4af7c317aee8339103ad9432bd019100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
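
Each "generating signed profile cert" step above issues a key pair and a certificate signed by the cached minikubeCA, with the listed IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.17.20.56 for the apiserver cert). A simplified crypto/x509 sketch of that flow, using a throwaway self-signed CA so it runs standalone; subject names, key usages, and validity here are assumptions, not what minikube writes:

// Issue a CA-signed certificate with IP SANs, roughly like the profile certs above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func signedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA, only so the sketch runs end to end.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("172.17.20.56")}
	der, _, err := signedCert(caCert, caKey, ips)
	fmt.Println(len(der) > 0, err)
}
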
	I0731 23:32:12.047433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:32:12.047433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:32:12.047433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:32:12.047433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:32:12.047433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 23:32:12.048436   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 23:32:12.048436   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 23:32:12.058433   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 23:32:12.059445   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:32:12.059445   12704 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:32:12.060450   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:32:12.060450   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:32:12.060450   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:32:12.060450   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:32:12.061428   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:32:12.061428   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:32:12.061428   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:32:12.062431   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:32:12.063437   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:32:12.123780   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:32:12.176942   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:32:12.226993   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:32:12.274638   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 23:32:12.328580   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 23:32:12.378328   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:32:12.433107   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:32:12.477567   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:32:12.515637   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:32:12.556857   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:32:12.597132   12704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:32:12.637398   12704 ssh_runner.go:195] Run: openssl version
	I0731 23:32:12.645017   12704 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:32:12.655655   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:32:12.683234   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:32:12.690629   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:32:12.690629   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:32:12.701403   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:32:12.708636   12704 command_runner.go:130] > 3ec20f2e
	I0731 23:32:12.720076   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:32:12.749949   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:32:12.778772   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:32:12.786258   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:32:12.786258   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:32:12.797789   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:32:12.806262   12704 command_runner.go:130] > b5213941
	I0731 23:32:12.817038   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:32:12.846287   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:32:12.873705   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:32:12.880436   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:32:12.881132   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:32:12.890924   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:32:12.898350   12704 command_runner.go:130] > 51391683
	I0731 23:32:12.909967   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 23:32:12.943186   12704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:32:12.948851   12704 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:32:12.948851   12704 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:32:12.948851   12704 kubeadm.go:392] StartCluster: {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:32:12.960329   12704 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 23:32:12.997147   12704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:32:13.013865   12704 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0731 23:32:13.013865   12704 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0731 23:32:13.014619   12704 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0731 23:32:13.025187   12704 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:32:13.054691   12704 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:32:13.072551   12704 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0731 23:32:13.072551   12704 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0731 23:32:13.072551   12704 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0731 23:32:13.072551   12704 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:32:13.073537   12704 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:32:13.073537   12704 kubeadm.go:157] found existing configuration files:
	
	I0731 23:32:13.085151   12704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:32:13.102703   12704 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:32:13.103251   12704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:32:13.114952   12704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:32:13.146853   12704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:32:13.165298   12704 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:32:13.165336   12704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:32:13.178259   12704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:32:13.209489   12704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:32:13.224485   12704 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:32:13.224592   12704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:32:13.238878   12704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:32:13.265353   12704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:32:13.280143   12704 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:32:13.281206   12704 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:32:13.293179   12704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
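For reference, the stale-config check and cleanup logged above boils down to roughly the following shell loop (a sketch reconstructed from the log, not minikube's source; the file list and the control-plane endpoint are exactly the ones shown):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the kubeconfig only if it already points at the expected endpoint
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done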
	I0731 23:32:13.312589   12704 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 23:32:13.782067   12704 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:32:13.782067   12704 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:32:27.314721   12704 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 23:32:27.314721   12704 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0731 23:32:27.314721   12704 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 23:32:27.314721   12704 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:32:27.314721   12704 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 23:32:27.314721   12704 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 23:32:27.315475   12704 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 23:32:27.315475   12704 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 23:32:27.315894   12704 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 23:32:27.315975   12704 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 23:32:27.316458   12704 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:32:27.316458   12704 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:32:27.319583   12704 out.go:204]   - Generating certificates and keys ...
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-411400] and IPs [172.17.20.56 127.0.0.1 ::1]
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-411400] and IPs [172.17.20.56 127.0.0.1 ::1]
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-411400] and IPs [172.17.20.56 127.0.0.1 ::1]
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-411400] and IPs [172.17.20.56 127.0.0.1 ::1]
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0731 23:32:27.319583   12704 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:32:27.319583   12704 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:32:27.319583   12704 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:32:27.319583   12704 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:32:27.319583   12704 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:32:27.319583   12704 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:32:27.327817   12704 out.go:204]   - Booting up control plane ...
	I0731 23:32:27.328260   12704 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:32:27.328309   12704 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:32:27.328511   12704 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:32:27.328572   12704 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:32:27.328718   12704 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:32:27.328779   12704 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 23:32:27.328779   12704 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 23:32:27.328779   12704 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001696033s
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001696033s
	I0731 23:32:27.328779   12704 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 23:32:27.328779   12704 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 23:32:27.328779   12704 command_runner.go:130] > [api-check] The API server is healthy after 7.002135189s
	I0731 23:32:27.328779   12704 kubeadm.go:310] [api-check] The API server is healthy after 7.002135189s
	I0731 23:32:27.328779   12704 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 23:32:27.328779   12704 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 23:32:27.328779   12704 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 23:32:27.328779   12704 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 23:32:27.328779   12704 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0731 23:32:27.328779   12704 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 23:32:27.328779   12704 kubeadm.go:310] [mark-control-plane] Marking the node multinode-411400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 23:32:27.328779   12704 command_runner.go:130] > [mark-control-plane] Marking the node multinode-411400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 23:32:27.328779   12704 kubeadm.go:310] [bootstrap-token] Using token: a0ream.4nu2up3nqz6axjjo
	I0731 23:32:27.328779   12704 command_runner.go:130] > [bootstrap-token] Using token: a0ream.4nu2up3nqz6axjjo
	I0731 23:32:27.333676   12704 out.go:204]   - Configuring RBAC rules ...
	I0731 23:32:27.333676   12704 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 23:32:27.333676   12704 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 23:32:27.333676   12704 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 23:32:27.333676   12704 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 23:32:27.333676   12704 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 23:32:27.333676   12704 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 23:32:27.338865   12704 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 23:32:27.338865   12704 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 23:32:27.339085   12704 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 23:32:27.339085   12704 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 23:32:27.339085   12704 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 23:32:27.339085   12704 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 23:32:27.339085   12704 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 23:32:27.339085   12704 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 23:32:27.339085   12704 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 23:32:27.339085   12704 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 23:32:27.339085   12704 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 23:32:27.339085   12704 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0731 23:32:27.339085   12704 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0731 23:32:27.339085   12704 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0731 23:32:27.339085   12704 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 23:32:27.339085   12704 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 23:32:27.339085   12704 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 23:32:27.339085   12704 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 23:32:27.339085   12704 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0731 23:32:27.339085   12704 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 23:32:27.339085   12704 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 23:32:27.339085   12704 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0731 23:32:27.339085   12704 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 23:32:27.339085   12704 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 23:32:27.339085   12704 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 23:32:27.339085   12704 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 23:32:27.339085   12704 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0731 23:32:27.339085   12704 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 23:32:27.339085   12704 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a0ream.4nu2up3nqz6axjjo \
	I0731 23:32:27.339085   12704 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token a0ream.4nu2up3nqz6axjjo \
	I0731 23:32:27.339085   12704 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf \
	I0731 23:32:27.339085   12704 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf \
	I0731 23:32:27.339085   12704 command_runner.go:130] > 	--control-plane 
	I0731 23:32:27.339085   12704 kubeadm.go:310] 	--control-plane 
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0731 23:32:27.339085   12704 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 23:32:27.339085   12704 kubeadm.go:310] 
	I0731 23:32:27.339085   12704 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a0ream.4nu2up3nqz6axjjo \
	I0731 23:32:27.339085   12704 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a0ream.4nu2up3nqz6axjjo \
	I0731 23:32:27.339085   12704 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 23:32:27.339085   12704 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 23:32:27.339085   12704 cni.go:84] Creating CNI manager for ""
	I0731 23:32:27.339085   12704 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 23:32:27.346869   12704 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 23:32:27.359399   12704 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 23:32:27.368408   12704 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 23:32:27.368408   12704 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0731 23:32:27.368408   12704 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0731 23:32:27.368408   12704 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:32:27.368408   12704 command_runner.go:130] > Access: 2024-07-31 23:30:33.376989900 +0000
	I0731 23:32:27.368408   12704 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0731 23:32:27.368408   12704 command_runner.go:130] > Change: 2024-07-31 23:30:24.916000000 +0000
	I0731 23:32:27.368408   12704 command_runner.go:130] >  Birth: -
	I0731 23:32:27.368920   12704 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 23:32:27.368920   12704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 23:32:27.415003   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 23:32:28.016357   12704 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0731 23:32:28.016443   12704 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0731 23:32:28.016443   12704 command_runner.go:130] > serviceaccount/kindnet created
	I0731 23:32:28.016520   12704 command_runner.go:130] > daemonset.apps/kindnet created
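If the kindnet rollout had to be verified by hand, one option (an illustration only, not something the test run above executes) would be:

	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset/kindnet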
	I0731 23:32:28.016579   12704 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:32:28.031031   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-411400 minikube.k8s.io/updated_at=2024_07_31T23_32_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=multinode-411400 minikube.k8s.io/primary=true
	I0731 23:32:28.034040   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:28.073038   12704 command_runner.go:130] > -16
	I0731 23:32:28.073107   12704 ops.go:34] apiserver oom_adj: -16
	I0731 23:32:28.238906   12704 command_runner.go:130] > node/multinode-411400 labeled
	I0731 23:32:28.244024   12704 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0731 23:32:28.256294   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:28.365655   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:28.762825   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:28.861272   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:29.261034   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:29.359550   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:29.767127   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:29.865643   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:30.268366   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:30.384537   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:30.769600   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:30.876326   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:31.266140   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:31.362710   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:31.769195   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:31.873230   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:32.257615   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:32.372744   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:32.763800   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:32.874020   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:33.265066   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:33.374153   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:33.763490   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:33.865944   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:34.264852   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:34.372983   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:34.768238   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:34.863673   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:35.268893   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:35.369960   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:35.770781   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:35.871748   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:36.269552   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:36.373860   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:36.761518   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:36.900412   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:37.275234   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:37.407689   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:37.769723   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:37.864023   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:38.269103   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:38.387154   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:38.770740   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:38.890753   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:39.257304   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:39.358393   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:39.761296   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:39.861165   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:40.264765   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:40.375520   12704 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 23:32:40.765424   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 23:32:40.874236   12704 command_runner.go:130] > NAME      SECRETS   AGE
	I0731 23:32:40.875193   12704 command_runner.go:130] > default   0         0s
	I0731 23:32:40.875308   12704 kubeadm.go:1113] duration metric: took 12.8585179s to wait for elevateKubeSystemPrivileges
	I0731 23:32:40.875398   12704 kubeadm.go:394] duration metric: took 27.9261926s to StartCluster
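The repeated "kubectl get sa default" calls above are a readiness poll; in shell terms they amount to something like this (a sketch of the behaviour, not minikube's implementation):

	    # poll until the "default" ServiceAccount exists, i.e. the cluster can run workloads
	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # the log shows roughly two attempts per second
	    done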
	I0731 23:32:40.875398   12704 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:40.875621   12704 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:32:40.877631   12704 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:32:40.878939   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 23:32:40.879040   12704 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 23:32:40.879040   12704 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:32:40.879183   12704 addons.go:69] Setting storage-provisioner=true in profile "multinode-411400"
	I0731 23:32:40.879183   12704 addons.go:69] Setting default-storageclass=true in profile "multinode-411400"
	I0731 23:32:40.879466   12704 addons.go:234] Setting addon storage-provisioner=true in "multinode-411400"
	I0731 23:32:40.879528   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:32:40.879466   12704 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-411400"
	I0731 23:32:40.879528   12704 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:32:40.880877   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:32:40.881386   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:32:40.885192   12704 out.go:177] * Verifying Kubernetes components...
	I0731 23:32:40.909489   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:32:41.272379   12704 command_runner.go:130] > apiVersion: v1
	I0731 23:32:41.272379   12704 command_runner.go:130] > data:
	I0731 23:32:41.272379   12704 command_runner.go:130] >   Corefile: |
	I0731 23:32:41.272789   12704 command_runner.go:130] >     .:53 {
	I0731 23:32:41.272789   12704 command_runner.go:130] >         errors
	I0731 23:32:41.272789   12704 command_runner.go:130] >         health {
	I0731 23:32:41.272789   12704 command_runner.go:130] >            lameduck 5s
	I0731 23:32:41.272789   12704 command_runner.go:130] >         }
	I0731 23:32:41.272789   12704 command_runner.go:130] >         ready
	I0731 23:32:41.272789   12704 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0731 23:32:41.272850   12704 command_runner.go:130] >            pods insecure
	I0731 23:32:41.272850   12704 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0731 23:32:41.272850   12704 command_runner.go:130] >            ttl 30
	I0731 23:32:41.272850   12704 command_runner.go:130] >         }
	I0731 23:32:41.272850   12704 command_runner.go:130] >         prometheus :9153
	I0731 23:32:41.272850   12704 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0731 23:32:41.272944   12704 command_runner.go:130] >            max_concurrent 1000
	I0731 23:32:41.272944   12704 command_runner.go:130] >         }
	I0731 23:32:41.272944   12704 command_runner.go:130] >         cache 30
	I0731 23:32:41.272944   12704 command_runner.go:130] >         loop
	I0731 23:32:41.273011   12704 command_runner.go:130] >         reload
	I0731 23:32:41.273011   12704 command_runner.go:130] >         loadbalance
	I0731 23:32:41.273011   12704 command_runner.go:130] >     }
	I0731 23:32:41.273011   12704 command_runner.go:130] > kind: ConfigMap
	I0731 23:32:41.273081   12704 command_runner.go:130] > metadata:
	I0731 23:32:41.273081   12704 command_runner.go:130] >   creationTimestamp: "2024-07-31T23:32:26Z"
	I0731 23:32:41.273081   12704 command_runner.go:130] >   name: coredns
	I0731 23:32:41.273081   12704 command_runner.go:130] >   namespace: kube-system
	I0731 23:32:41.273081   12704 command_runner.go:130] >   resourceVersion: "227"
	I0731 23:32:41.273145   12704 command_runner.go:130] >   uid: cd3ffb22-71a5-4e20-8971-eea51aad6a1b
	I0731 23:32:41.274932   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 23:32:41.298579   12704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:32:41.694249   12704 command_runner.go:130] > configmap/coredns replaced
	I0731 23:32:41.694472   12704 start.go:971] {"host.minikube.internal": 172.17.16.1} host record injected into CoreDNS's ConfigMap
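Combined with the Corefile dumped above, the sed pipeline at 23:32:41.274932 inserts a hosts block (plus a log directive) so that the patched Corefile contains, immediately before the forward stanza, an entry like the following (reconstructed from the sed expression in the log, not a dump of the replaced ConfigMap):

	        hosts {
	           172.17.16.1 host.minikube.internal
	           fallthrough
	        }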
	I0731 23:32:41.695669   12704 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:32:41.695669   12704 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:32:41.696498   12704 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.20.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:32:41.696498   12704 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.20.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:32:41.698273   12704 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 23:32:41.698655   12704 node_ready.go:35] waiting up to 6m0s for node "multinode-411400" to be "Ready" ...
	I0731 23:32:41.698655   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:41.698655   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:41.698655   12704 round_trippers.go:463] GET https://172.17.20.56:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 23:32:41.698655   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:41.699191   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:41.699263   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:41.698655   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:41.699263   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:41.710080   12704 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 23:32:41.710080   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:41.710080   12704 round_trippers.go:580]     Audit-Id: 45cb224c-b86c-40a3-a072-cf360ae6488b
	I0731 23:32:41.710080   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:41.710080   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:41.710080   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:41.710080   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:41.710080   12704 round_trippers.go:580]     Content-Length: 291
	I0731 23:32:41.710080   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:41 GMT
	I0731 23:32:41.710080   12704 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"66154797-1f19-453b-abc8-dd8f19083ef4","resourceVersion":"361","creationTimestamp":"2024-07-31T23:32:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 23:32:41.711081   12704 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 23:32:41.711081   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:41.711081   12704 round_trippers.go:580]     Audit-Id: d38b8118-bed6-4337-aa8f-57e52ca18742
	I0731 23:32:41.711081   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:41.711081   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:41.711081   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:41.711081   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:41.711081   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:41 GMT
	I0731 23:32:41.711081   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:41.711081   12704 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"66154797-1f19-453b-abc8-dd8f19083ef4","resourceVersion":"361","creationTimestamp":"2024-07-31T23:32:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 23:32:41.712068   12704 round_trippers.go:463] PUT https://172.17.20.56:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 23:32:41.712068   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:41.712068   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:41.712068   12704 round_trippers.go:473]     Content-Type: application/json
	I0731 23:32:41.712068   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:41.725079   12704 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0731 23:32:41.725741   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:41.725741   12704 round_trippers.go:580]     Audit-Id: 576a1086-ec4a-4f11-9899-2b21d45eaf35
	I0731 23:32:41.725741   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:41.725741   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:41.725741   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:41.725741   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:41.725828   12704 round_trippers.go:580]     Content-Length: 291
	I0731 23:32:41.725869   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:41 GMT
	I0731 23:32:41.725869   12704 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"66154797-1f19-453b-abc8-dd8f19083ef4","resourceVersion":"363","creationTimestamp":"2024-07-31T23:32:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 23:32:42.212350   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:42.212350   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:42.212671   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:42.212671   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:42.212350   12704 round_trippers.go:463] GET https://172.17.20.56:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 23:32:42.212671   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:42.212671   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:42.212671   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:42.219154   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:32:42.219240   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Audit-Id: e1383bef-40d2-4878-93ee-a28e2aea958b
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:42.219240   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:42.219240   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Content-Length: 291
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:42 GMT
	I0731 23:32:42.219240   12704 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"66154797-1f19-453b-abc8-dd8f19083ef4","resourceVersion":"373","creationTimestamp":"2024-07-31T23:32:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0731 23:32:42.219240   12704 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-411400" context rescaled to 1 replicas
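The rescale above is performed by PUTting the autoscaling/v1 Scale subresource directly (spec.replicas goes from 2 to 1, as the request and response bodies show). The kubectl equivalent, given only for illustration, is:

	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	      scale deployment coredns --replicas=1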
	I0731 23:32:42.219240   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:32:42.219240   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:42.219240   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:42.219240   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:42 GMT
	I0731 23:32:42.219240   12704 round_trippers.go:580]     Audit-Id: 90365c22-4153-4795-875f-d0c3d93aff63
	I0731 23:32:42.219240   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:42.702534   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:42.702740   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:42.702740   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:42.702740   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:42.706901   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:42.706901   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:42.706901   12704 round_trippers.go:580]     Audit-Id: 3b090236-880f-4f73-a65c-1cf55fc44a97
	I0731 23:32:42.706901   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:42.706901   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:42.706901   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:42.706901   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:42.706901   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:42 GMT
	I0731 23:32:42.706901   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:43.206725   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:43.206798   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:43.206858   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:43.206858   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:43.218438   12704 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 23:32:43.219333   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:43.219333   12704 round_trippers.go:580]     Audit-Id: e2b4ae8b-705c-4b9b-a2b9-1c67d52357a1
	I0731 23:32:43.219333   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:43.219333   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:43.219420   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:43.219420   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:43.219420   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:43 GMT
	I0731 23:32:43.219467   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:43.241637   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:32:43.241693   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:43.242666   12704 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:32:43.243392   12704 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.20.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:32:43.244683   12704 addons.go:234] Setting addon default-storageclass=true in "multinode-411400"
	I0731 23:32:43.244843   12704 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:32:43.245412   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:32:43.251995   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:32:43.251995   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:43.254893   12704 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:32:43.257595   12704 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:32:43.257595   12704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 23:32:43.257595   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
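(For reference: the Hyper-V queries the driver issues here, and again below when it resolves the guest's IP address, can be reproduced by hand from an elevated PowerShell prompt. This is only a sketch using the VM name from this run, not part of the test output:
    ( Hyper-V\Get-VM multinode-411400 ).State
    (( Hyper-V\Get-VM multinode-411400 ).NetworkAdapters[0]).IPAddresses[0]
The driver repeats the State query before each SSH transfer and the IPAddresses query to build the SSH target.)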
	I0731 23:32:43.715112   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:43.715219   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:43.715219   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:43.715219   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:43.718509   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:43.719419   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:43.719537   12704 round_trippers.go:580]     Audit-Id: e11c5c9a-a241-4d0e-9aab-9ded7bc00d6d
	I0731 23:32:43.719537   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:43.719603   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:43.719603   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:43.719644   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:43.719644   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:43 GMT
	I0731 23:32:43.720115   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:43.721086   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:44.205511   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:44.205511   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:44.205511   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:44.205511   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:44.212769   12704 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:32:44.213465   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:44.213512   12704 round_trippers.go:580]     Audit-Id: 31a072c5-c1fd-476b-80cb-b8eb27232a25
	I0731 23:32:44.213512   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:44.213512   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:44.213512   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:44.213512   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:44.213512   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:44 GMT
	I0731 23:32:44.213859   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:44.710500   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:44.710648   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:44.710648   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:44.710648   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:44.714865   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:44.714865   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:44.715303   12704 round_trippers.go:580]     Audit-Id: dce257a8-8306-4c8e-962e-2d4abf2cd6cc
	I0731 23:32:44.715303   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:44.715303   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:44.715303   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:44.715303   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:44.715406   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:44 GMT
	I0731 23:32:44.716159   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:45.213903   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:45.213903   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:45.213903   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:45.213903   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:45.216524   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:45.216524   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:45.216524   12704 round_trippers.go:580]     Audit-Id: 0fea6b83-9002-428c-95e4-24efbaad7f73
	I0731 23:32:45.216524   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:45.216524   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:45.216524   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:45.216524   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:45.216524   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:45 GMT
	I0731 23:32:45.217522   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:45.614084   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:32:45.614084   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:45.614249   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:32:45.614608   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:32:45.615151   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:45.615399   12704 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 23:32:45.615480   12704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 23:32:45.615595   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:32:45.701984   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:45.702061   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:45.702061   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:45.702061   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:45.705717   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:45.705812   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:45.705812   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:45.705812   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:45 GMT
	I0731 23:32:45.705812   12704 round_trippers.go:580]     Audit-Id: 9cb3e7a8-9a5c-4d50-bf89-637f7d7e59c0
	I0731 23:32:45.705993   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:45.706026   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:45.706026   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:45.706741   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:46.214014   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:46.214014   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:46.214104   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:46.214104   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:46.218521   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:46.218662   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:46.218662   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:46 GMT
	I0731 23:32:46.218662   12704 round_trippers.go:580]     Audit-Id: 0a61c69b-4a9d-44ad-89d6-ca9d2ff9c107
	I0731 23:32:46.218662   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:46.218662   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:46.218662   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:46.218662   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:46.219079   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:46.219636   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:46.705397   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:46.705453   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:46.705453   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:46.705453   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:46.716268   12704 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 23:32:46.716451   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:46.716451   12704 round_trippers.go:580]     Audit-Id: 32130faf-4f5d-4b88-b009-04a06107ffe3
	I0731 23:32:46.716451   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:46.716536   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:46.716536   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:46.716566   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:46.716566   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:46 GMT
	I0731 23:32:46.716997   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:47.209475   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:47.209809   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:47.209809   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:47.209809   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:47.514929   12704 round_trippers.go:574] Response Status: 200 OK in 304 milliseconds
	I0731 23:32:47.515037   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:47.515149   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:47 GMT
	I0731 23:32:47.515370   12704 round_trippers.go:580]     Audit-Id: 68e8fc50-8f1e-4715-b7fd-361938ed3f98
	I0731 23:32:47.515370   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:47.515370   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:47.515370   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:47.515459   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:47.515842   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:47.713595   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:47.713655   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:47.713655   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:47.713655   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:47.716704   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:47.716704   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:47.716704   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:47.716704   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:47 GMT
	I0731 23:32:47.717351   12704 round_trippers.go:580]     Audit-Id: 405601ad-a2a5-4092-ac67-098b3f11698b
	I0731 23:32:47.717351   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:47.717351   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:47.717351   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:47.717639   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:47.825077   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:32:47.825077   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:47.825606   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:32:48.206493   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:48.206780   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:48.206780   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:48.206780   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:48.209987   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:48.209987   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:48.209987   12704 round_trippers.go:580]     Audit-Id: 139fcc53-6d4f-4169-b2cc-0f1330190405
	I0731 23:32:48.209987   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:48.209987   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:48.209987   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:48.209987   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:48.209987   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:48 GMT
	I0731 23:32:48.211095   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:48.260613   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:32:48.260613   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:48.260613   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:32:48.395894   12704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
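(The apply above is executed inside the guest over SSH. Assuming the profile name from this run, roughly the same step could be issued manually via minikube ssh; a sketch, with the guest-side paths taken verbatim from the log line above:
    minikube -p multinode-411400 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
)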
	I0731 23:32:48.710866   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:48.710866   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:48.710866   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:48.710866   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:48.714591   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:48.714591   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:48.714591   12704 round_trippers.go:580]     Audit-Id: 61a76f6b-57fd-4ea0-8643-9ae04852c00a
	I0731 23:32:48.714591   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:48.714591   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:48.714591   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:48.714591   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:48.714591   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:48 GMT
	I0731 23:32:48.715243   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:48.715623   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:49.204734   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:49.204734   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:49.204734   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:49.204734   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:49.375486   12704 round_trippers.go:574] Response Status: 200 OK in 170 milliseconds
	I0731 23:32:49.375486   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:49.375575   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:49.375575   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:49.375575   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:49 GMT
	I0731 23:32:49.375575   12704 round_trippers.go:580]     Audit-Id: 13b7cdb4-a025-46c5-bab5-8e50873ee092
	I0731 23:32:49.375575   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:49.375575   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:49.375575   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:49.444388   12704 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0731 23:32:49.444515   12704 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0731 23:32:49.444515   12704 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 23:32:49.444515   12704 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 23:32:49.444632   12704 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0731 23:32:49.444632   12704 command_runner.go:130] > pod/storage-provisioner created
	I0731 23:32:49.444708   12704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048725s)
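(Once the apply completes, the created objects can be checked directly against the cluster; a sketch, assuming the kubeconfig context follows the profile name as elsewhere in this report:
    kubectl --context multinode-411400 -n kube-system get pod storage-provisioner
)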
	I0731 23:32:49.711627   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:49.711689   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:49.711747   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:49.711747   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:49.720675   12704 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:32:49.720675   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:49.720675   12704 round_trippers.go:580]     Audit-Id: 0b3beb7e-a05a-4ddf-83cd-62eb42cb8ed9
	I0731 23:32:49.720675   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:49.720675   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:49.720675   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:49.720675   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:49.720675   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:49 GMT
	I0731 23:32:49.721012   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:50.201661   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:50.201743   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:50.201743   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:50.201811   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:50.207951   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:32:50.208014   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:50.208052   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:50 GMT
	I0731 23:32:50.208052   12704 round_trippers.go:580]     Audit-Id: 48b2e40f-6318-4e92-9488-07c58a616078
	I0731 23:32:50.208052   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:50.208052   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:50.208052   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:50.208052   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:50.208052   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:50.432624   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:32:50.432624   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:32:50.434021   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:32:50.558628   12704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 23:32:50.694163   12704 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0731 23:32:50.694163   12704 round_trippers.go:463] GET https://172.17.20.56:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 23:32:50.694163   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:50.694163   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:50.694163   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:50.697774   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:50.697843   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:50.697843   12704 round_trippers.go:580]     Audit-Id: 88f05d7e-2ad4-4256-858d-22481a02c504
	I0731 23:32:50.697843   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:50.697843   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:50.697915   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:50.697915   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:50.697915   12704 round_trippers.go:580]     Content-Length: 1273
	I0731 23:32:50.697915   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:50 GMT
	I0731 23:32:50.697983   12704 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"standard","uid":"98bb6bda-7959-432b-89e6-e7b8fe4d7a11","resourceVersion":"401","creationTimestamp":"2024-07-31T23:32:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T23:32:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0731 23:32:50.698824   12704 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"98bb6bda-7959-432b-89e6-e7b8fe4d7a11","resourceVersion":"401","creationTimestamp":"2024-07-31T23:32:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T23:32:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0731 23:32:50.698824   12704 round_trippers.go:463] PUT https://172.17.20.56:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 23:32:50.698824   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:50.698824   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:50.698824   12704 round_trippers.go:473]     Content-Type: application/json
	I0731 23:32:50.698824   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:50.699223   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:50.699223   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:50.699223   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:50.699223   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:50.701816   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:50.701816   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:50.701816   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:50.701816   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Content-Length: 1220
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:50 GMT
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Audit-Id: 9ea54504-b839-40b4-bfc0-2cfd1543544f
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:50.701816   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:50.701816   12704 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"98bb6bda-7959-432b-89e6-e7b8fe4d7a11","resourceVersion":"401","creationTimestamp":"2024-07-31T23:32:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-31T23:32:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0731 23:32:50.701816   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Audit-Id: e743e1e8-839b-4514-aa39-7a5d43f6c175
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:50.701816   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:50.701816   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:50.701816   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:50 GMT
	I0731 23:32:50.701816   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:50.708717   12704 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 23:32:50.711157   12704 addons.go:510] duration metric: took 9.831993s for enable addons: enabled=[storage-provisioner default-storageclass]
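(Both addons are reported enabled at this point. A quick manual verification, as a sketch using the profile/context names from this run:
    minikube -p multinode-411400 addons list
    kubectl --context multinode-411400 get storageclass standard
The "standard" StorageClass name matches the object shown in the PUT/GET bodies above.)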
	I0731 23:32:51.202326   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:51.202326   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:51.202326   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:51.202326   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:51.206938   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:51.206938   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:51.207187   12704 round_trippers.go:580]     Audit-Id: 2358f57f-d0e1-4413-b928-1d48b405484a
	I0731 23:32:51.207187   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:51.207187   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:51.207187   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:51.207187   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:51.207187   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:51 GMT
	I0731 23:32:51.207645   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:51.208321   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:51.699906   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:51.700027   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:51.700027   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:51.700027   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:51.703395   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:51.703422   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:51.703422   12704 round_trippers.go:580]     Audit-Id: 43cda691-0dfd-4421-b02a-019a9b5b7715
	I0731 23:32:51.703422   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:51.703422   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:51.703422   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:51.703422   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:51.703422   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:51 GMT
	I0731 23:32:51.703626   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:52.200815   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:52.201007   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:52.201007   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:52.201007   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:52.207478   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:32:52.207478   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:52.207478   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:52.207478   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:52 GMT
	I0731 23:32:52.207478   12704 round_trippers.go:580]     Audit-Id: e2de0785-adb0-436b-a07b-006427d17b9c
	I0731 23:32:52.207695   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:52.207695   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:52.207695   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:52.207732   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:52.713355   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:52.713355   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:52.713617   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:52.713617   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:52.717092   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:52.717092   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:52.717092   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:52.717192   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:52.717192   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:52.717192   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:52 GMT
	I0731 23:32:52.717192   12704 round_trippers.go:580]     Audit-Id: cee91ccc-77e7-47bb-a2e3-8b7f6dcf8c6f
	I0731 23:32:52.717192   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:52.717435   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:53.211957   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:53.212049   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:53.212110   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:53.212110   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:53.218737   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:32:53.218737   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:53.218737   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:53.218737   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:53 GMT
	I0731 23:32:53.218737   12704 round_trippers.go:580]     Audit-Id: 83721d82-1381-4bb3-8851-65656b53ff95
	I0731 23:32:53.218737   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:53.218737   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:53.218737   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:53.219391   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:53.219452   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:53.710390   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:53.710527   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:53.710527   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:53.710527   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:53.714915   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:53.714915   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:53.714915   12704 round_trippers.go:580]     Audit-Id: 32117543-8356-4afc-b169-6ff42b82044a
	I0731 23:32:53.715187   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:53.715187   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:53.715187   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:53.715187   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:53.715187   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:53 GMT
	I0731 23:32:53.715454   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:54.209217   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:54.209312   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:54.209312   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:54.209312   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:54.213733   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:54.213733   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:54.213889   12704 round_trippers.go:580]     Audit-Id: fa2a8063-2674-4255-b23a-61814b18cca6
	I0731 23:32:54.213889   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:54.213889   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:54.213889   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:54.213889   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:54.213889   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:54 GMT
	I0731 23:32:54.214349   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:54.710179   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:54.710297   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:54.710297   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:54.710297   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:54.712848   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:54.712848   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:54.712848   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:54.712848   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:54.712848   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:54 GMT
	I0731 23:32:54.713862   12704 round_trippers.go:580]     Audit-Id: db45822a-600f-4dd9-8b05-e9d5136ff742
	I0731 23:32:54.713862   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:54.713862   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:54.714214   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:55.209290   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:55.209470   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:55.209470   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:55.209470   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:55.213104   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:55.213392   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:55.213392   12704 round_trippers.go:580]     Audit-Id: b51ad74c-60ac-4e6b-ad45-11b70f0197d9
	I0731 23:32:55.213467   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:55.213467   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:55.213467   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:55.213467   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:55.213467   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:55 GMT
	I0731 23:32:55.213732   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:55.707790   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:55.707859   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:55.707859   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:55.707859   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:55.710156   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:55.710156   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:55.710156   12704 round_trippers.go:580]     Audit-Id: ea58468f-e1e2-4a6a-80ce-4a125a40d27f
	I0731 23:32:55.710156   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:55.710156   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:55.710156   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:55.710156   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:55.710156   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:55 GMT
	I0731 23:32:55.711049   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:55.711583   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:56.204489   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:56.204489   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:56.204489   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:56.204489   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:56.207990   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:56.207990   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:56.207990   12704 round_trippers.go:580]     Audit-Id: 75b7c597-4f77-4280-9a60-523c16a47901
	I0731 23:32:56.207990   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:56.207990   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:56.207990   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:56.207990   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:56.207990   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:56 GMT
	I0731 23:32:56.208968   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:56.704572   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:56.704572   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:56.704572   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:56.704572   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:56.707132   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:56.707304   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:56.707304   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:56 GMT
	I0731 23:32:56.707304   12704 round_trippers.go:580]     Audit-Id: a763d6aa-8363-4973-a2f4-710c4e201933
	I0731 23:32:56.707304   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:56.707304   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:56.707304   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:56.707304   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:56.708086   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:57.204670   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:57.204863   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:57.204863   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:57.204863   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:57.209388   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:57.209388   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:57.209388   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:57.209388   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:57.209388   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:57 GMT
	I0731 23:32:57.209456   12704 round_trippers.go:580]     Audit-Id: 66cf79cd-cc17-4776-89fa-acf5b1073c27
	I0731 23:32:57.209456   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:57.209456   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:57.210578   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:57.707594   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:57.707594   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:57.707594   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:57.707594   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:57.711822   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:32:57.711822   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:57.711822   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:57.711822   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:57.711822   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:57 GMT
	I0731 23:32:57.711822   12704 round_trippers.go:580]     Audit-Id: d1c0793c-ee7a-48fc-9cb8-a3695aed0caf
	I0731 23:32:57.711822   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:57.711822   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:57.712847   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"314","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0731 23:32:57.713665   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:32:58.203614   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:58.203677   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:58.203677   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:58.203677   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:58.214252   12704 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 23:32:58.214252   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:58.214252   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:58.214392   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:58 GMT
	I0731 23:32:58.214392   12704 round_trippers.go:580]     Audit-Id: 3bdb9ec3-8e4b-47fe-a484-5b359a83ff82
	I0731 23:32:58.214392   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:58.214392   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:58.214392   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:58.214450   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:32:58.701227   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:58.701227   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:58.701227   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:58.701227   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:58.704058   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:32:58.704058   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:58.704058   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:58.704058   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:58.704058   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:58 GMT
	I0731 23:32:58.704058   12704 round_trippers.go:580]     Audit-Id: 52e831ed-5fa5-45a2-8a37-a8b0a7ee6be5
	I0731 23:32:58.704058   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:58.704058   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:58.705319   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:32:59.200477   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:59.200602   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:59.200602   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:59.200602   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:59.204058   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:59.204058   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:59.204058   12704 round_trippers.go:580]     Audit-Id: 9075c5c8-f68f-47f2-8485-3a973238445e
	I0731 23:32:59.204058   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:59.204058   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:59.205012   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:59.205012   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:59.205012   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:59 GMT
	I0731 23:32:59.205232   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:32:59.700150   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:32:59.700150   12704 round_trippers.go:469] Request Headers:
	I0731 23:32:59.700150   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:32:59.700150   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:32:59.703810   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:32:59.703893   12704 round_trippers.go:577] Response Headers:
	I0731 23:32:59.703893   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:32:59 GMT
	I0731 23:32:59.703893   12704 round_trippers.go:580]     Audit-Id: b7bebd7e-3b0a-4d72-8258-b12e9e48825f
	I0731 23:32:59.703893   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:32:59.703893   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:32:59.703893   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:32:59.703893   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:32:59.704019   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:33:00.213825   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:00.213825   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:00.213825   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:00.213825   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:00.217661   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:00.217866   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:00.217866   12704 round_trippers.go:580]     Audit-Id: 77254a5e-99aa-467c-ab90-8b6a40738348
	I0731 23:33:00.217866   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:00.217866   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:00.217866   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:00.217866   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:00.217866   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:00 GMT
	I0731 23:33:00.217866   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:33:00.218563   12704 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:33:00.709994   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:00.710310   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:00.710310   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:00.710310   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:00.715500   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:33:00.715587   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:00.715587   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:00.715587   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:00.715638   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:00.715638   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:00.715638   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:00 GMT
	I0731 23:33:00.715638   12704 round_trippers.go:580]     Audit-Id: 6257b277-6b63-44ef-9664-38781dae3fa3
	I0731 23:33:00.715936   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"405","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 5103 chars]
	I0731 23:33:01.211562   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:01.211562   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.211562   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.211562   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.215243   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:01.215243   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.215243   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.215675   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.215675   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.215675   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.215675   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.215675   12704 round_trippers.go:580]     Audit-Id: 149ec0d4-0687-4ae0-a543-0a5b4f153979
	I0731 23:33:01.216116   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:01.216689   12704 node_ready.go:49] node "multinode-411400" has status "Ready":"True"
	I0731 23:33:01.216689   12704 node_ready.go:38] duration metric: took 19.517788s for node "multinode-411400" to be "Ready" ...
	I0731 23:33:01.216689   12704 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:33:01.216689   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:33:01.216689   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.216689   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.216689   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.220821   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:33:01.220903   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.220903   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.220950   12704 round_trippers.go:580]     Audit-Id: 0505c65a-dd0d-47bb-b50a-be6f3493cf40
	I0731 23:33:01.220950   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.220950   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.220950   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.220950   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.223050   12704 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"414","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0731 23:33:01.228223   12704 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:01.228311   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:33:01.228400   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.228400   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.228400   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.231248   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:01.231248   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.231248   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.231248   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.231248   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.231248   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.231248   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.231248   12704 round_trippers.go:580]     Audit-Id: bd188976-b627-48ba-ab7a-448159facb68
	I0731 23:33:01.231887   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"414","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0731 23:33:01.232114   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:01.232114   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.232114   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.232114   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.236643   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:33:01.236769   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.236769   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.236812   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.236858   12704 round_trippers.go:580]     Audit-Id: bd7729f1-5b62-4a81-8dd6-b8b5c06d1956
	I0731 23:33:01.236858   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.236858   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.236858   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.237691   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:01.744178   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:33:01.744178   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.744178   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.744178   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.747570   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:01.747609   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.747609   12704 round_trippers.go:580]     Audit-Id: da9f1ab3-f7f9-4141-9e22-015203caf39b
	I0731 23:33:01.747609   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.747609   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.747609   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.747609   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.747609   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.748201   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"414","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0731 23:33:01.748956   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:01.749096   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:01.749096   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:01.749096   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:01.751411   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:01.751411   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:01.751411   12704 round_trippers.go:580]     Audit-Id: f29da2c6-2874-4e76-beed-37fdb1ad980e
	I0731 23:33:01.751411   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:01.751411   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:01.751411   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:01.751411   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:01.752359   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:01 GMT
	I0731 23:33:01.752888   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:02.233903   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:33:02.233978   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:02.233978   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:02.233978   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:02.241319   12704 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:33:02.241782   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:02.241782   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:02.241782   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:02.241782   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:02 GMT
	I0731 23:33:02.241782   12704 round_trippers.go:580]     Audit-Id: 957a5455-44be-49db-bf83-fa89b8e272be
	I0731 23:33:02.241782   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:02.241782   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:02.241782   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"414","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0731 23:33:02.242728   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:02.242759   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:02.242759   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:02.242759   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:02.246144   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:02.246144   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:02.246144   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:02.246144   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:02.246144   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:02 GMT
	I0731 23:33:02.246144   12704 round_trippers.go:580]     Audit-Id: 3dff153f-b158-43e6-8d67-3732410062ea
	I0731 23:33:02.246144   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:02.246144   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:02.246144   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:02.739053   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:33:02.739112   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:02.739172   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:02.739172   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:02.741931   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:02.741931   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:02.741931   12704 round_trippers.go:580]     Audit-Id: 992fd030-1ed7-447a-a7d5-1ab9036251be
	I0731 23:33:02.741931   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:02.742526   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:02.742526   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:02.742526   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:02.742526   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:02 GMT
	I0731 23:33:02.742829   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"414","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0731 23:33:02.743538   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:02.743538   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:02.743538   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:02.743538   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:02.746116   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:02.746116   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:02.746116   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:02.746116   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:02.746312   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:02 GMT
	I0731 23:33:02.746312   12704 round_trippers.go:580]     Audit-Id: 5f054e58-72c0-4b80-b347-2afd13c9f305
	I0731 23:33:02.746312   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:02.746312   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:02.746755   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.242751   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:33:03.242751   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.242751   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.242751   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.247124   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:33:03.247220   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.247220   12704 round_trippers.go:580]     Audit-Id: d37e2345-3dee-452c-90b1-2a6ab9fdc6ef
	I0731 23:33:03.247220   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.247220   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.247273   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.247298   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.247298   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.248552   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"427","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0731 23:33:03.249356   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.249356   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.249356   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.249356   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.253691   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:33:03.253691   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.253691   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.253691   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.253691   12704 round_trippers.go:580]     Audit-Id: 671c7210-59b7-415a-8fff-56b217614ce1
	I0731 23:33:03.253691   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.253691   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.253691   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.253691   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.253691   12704 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.253691   12704 pod_ready.go:81] duration metric: took 2.0254425s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.253691   12704 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.254829   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:33:03.254829   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.254829   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.254829   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.257879   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:03.257879   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.258178   12704 round_trippers.go:580]     Audit-Id: 446924ab-e7fb-4a1a-ab3a-81c0088aa963
	I0731 23:33:03.258178   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.258178   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.258178   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.258178   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.258224   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.258407   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"d1476f05-7d77-424f-b5b3-c4c29f539af6","resourceVersion":"384","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.20.56:2379","kubernetes.io/config.hash":"5ce972ac835dbc580b580a401b4d452c","kubernetes.io/config.mirror":"5ce972ac835dbc580b580a401b4d452c","kubernetes.io/config.seen":"2024-07-31T23:32:26.731474656Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0731 23:33:03.258919   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.258919   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.258919   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.258919   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.261933   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:03.261933   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.261933   12704 round_trippers.go:580]     Audit-Id: 046819ee-e51a-4630-9dad-75fbda7ab9b9
	I0731 23:33:03.262128   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.262128   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.262128   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.262128   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.262128   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.262375   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.262731   12704 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.262731   12704 pod_ready.go:81] duration metric: took 9.0394ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.262731   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.262850   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:33:03.262905   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.262905   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.262905   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.264334   12704 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:33:03.264334   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.265337   12704 round_trippers.go:580]     Audit-Id: ad619b09-5d8b-4ecc-9b30-a075c764a4aa
	I0731 23:33:03.265337   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.265337   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.265337   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.265337   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.265337   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.265337   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"fd9ca41e-c7ca-416e-b00e-b6cf76e4c434","resourceVersion":"385","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.20.56:8443","kubernetes.io/config.hash":"5f6d87c3026905e576dd63c1bfb6b167","kubernetes.io/config.mirror":"5f6d87c3026905e576dd63c1bfb6b167","kubernetes.io/config.seen":"2024-07-31T23:32:26.731475956Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0731 23:33:03.265337   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.266074   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.266074   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.266074   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.267916   12704 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:33:03.267916   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.267916   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.268919   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.268940   12704 round_trippers.go:580]     Audit-Id: 52fbb88b-7773-49e6-ac97-85f31bd09778
	I0731 23:33:03.268940   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.268940   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.268940   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.269089   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.269386   12704 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.269386   12704 pod_ready.go:81] duration metric: took 6.5896ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.269386   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.269386   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:33:03.269386   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.269386   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.269386   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.272135   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:03.272135   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.272135   12704 round_trippers.go:580]     Audit-Id: 0f5d0897-850a-4234-98c3-404c8593c97e
	I0731 23:33:03.272135   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.272135   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.272855   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.272855   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.272855   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.273042   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"386","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0731 23:33:03.273573   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.273573   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.273573   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.273763   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.277654   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:03.277654   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.277654   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.277654   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.277740   12704 round_trippers.go:580]     Audit-Id: d2c7310f-3479-4b8b-8ee5-6d68f15d0e33
	I0731 23:33:03.277740   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.277758   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.277758   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.278960   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.279624   12704 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.279679   12704 pod_ready.go:81] duration metric: took 10.2934ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.279679   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.279769   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:33:03.279844   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.279844   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.279869   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.282842   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:03.282842   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.282842   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.282842   12704 round_trippers.go:580]     Audit-Id: da133327-0a4d-4c73-97cd-0546de9fc221
	I0731 23:33:03.282842   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.282842   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.282842   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.282842   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.282842   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"379","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0731 23:33:03.283608   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.283608   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.283608   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.283608   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.286190   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:33:03.286190   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.286190   12704 round_trippers.go:580]     Audit-Id: 8eefe01c-0236-4cd6-bfba-934abdfdda72
	I0731 23:33:03.286190   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.286639   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.286639   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.286639   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.286639   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.286792   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.287202   12704 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.287202   12704 pod_ready.go:81] duration metric: took 7.5224ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.287256   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.446875   12704 request.go:629] Waited for 159.4241ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:33:03.446875   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:33:03.446875   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.446875   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.446875   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.446875   12704 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 23:33:03.446875   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.446875   12704 round_trippers.go:580]     Audit-Id: 4c445c87-bcd4-4b5e-9297-ae1b45a06acf
	I0731 23:33:03.446875   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.446875   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.446875   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.446875   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.446875   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.446875   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"383","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0731 23:33:03.650176   12704 request.go:629] Waited for 203.2987ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.650384   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:33:03.650384   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.650418   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.650487   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.657334   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:33:03.657334   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.657334   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.657334   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.657334   12704 round_trippers.go:580]     Audit-Id: 6ebfd48a-2ea7-4a0f-b1f0-62f7584d14d6
	I0731 23:33:03.657334   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.657334   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.657334   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.658758   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:33:03.658944   12704 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:33:03.658944   12704 pod_ready.go:81] duration metric: took 371.6834ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:33:03.658944   12704 pod_ready.go:38] duration metric: took 2.4422246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
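The loop above is the pod-readiness phase: the client repeatedly GETs each system-critical pod (and its node) until the pod reports a Ready condition of True. The sketch below is only an illustration of that polling pattern, not minikube's pod_ready.go implementation; the kubeconfig location, namespace, and pod name are taken from defaults and from the log.

// Illustrative sketch: poll a kube-system pod until its Ready condition is
// True, similar in spirit to the pod_ready loop logged above. Assumes a
// reachable cluster and a kubeconfig at the default location.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, roughly the cadence visible in the timestamps above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-z8gtw", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}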
	I0731 23:33:03.658944   12704 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:33:03.671923   12704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:33:03.700016   12704 command_runner.go:130] > 2165
	I0731 23:33:03.700016   12704 api_server.go:72] duration metric: took 22.8205457s to wait for apiserver process to appear ...
	I0731 23:33:03.700016   12704 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:33:03.700016   12704 api_server.go:253] Checking apiserver healthz at https://172.17.20.56:8443/healthz ...
	I0731 23:33:03.708489   12704 api_server.go:279] https://172.17.20.56:8443/healthz returned 200:
	ok
	I0731 23:33:03.709168   12704 round_trippers.go:463] GET https://172.17.20.56:8443/version
	I0731 23:33:03.709207   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.709207   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.709207   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.710004   12704 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 23:33:03.711075   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.711075   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.711158   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.711158   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.711158   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.711158   12704 round_trippers.go:580]     Content-Length: 263
	I0731 23:33:03.711158   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.711214   12704 round_trippers.go:580]     Audit-Id: 11f25d20-912f-4b35-88fd-dbce19ccf211
	I0731 23:33:03.711214   12704 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 23:33:03.711327   12704 api_server.go:141] control plane version: v1.30.3
	I0731 23:33:03.711399   12704 api_server.go:131] duration metric: took 11.3822ms to wait for apiserver health ...
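Once the pods are Ready, the log shows a plain probe of the apiserver's /healthz endpoint followed by a /version request to read the control-plane version. A minimal sketch of that health check is below; the endpoint is copied from the log, and skipping TLS verification is a simplification for the example (a real client, including minikube, would trust the cluster CA instead).

// Illustrative sketch: probe an apiserver's /healthz and /version endpoints,
// as in the log above. TLS verification is skipped here only for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://172.17.20.56:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
	}
}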
	I0731 23:33:03.711440   12704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:33:03.853804   12704 request.go:629] Waited for 142.2292ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:33:03.853966   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:33:03.853991   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:03.853991   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:03.854055   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:03.859351   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:33:03.859351   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:03.859351   12704 round_trippers.go:580]     Audit-Id: 8cfdaa21-27ae-47d5-87b1-3e7fba5e19c1
	I0731 23:33:03.859351   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:03.859351   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:03.859351   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:03.859351   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:03.859727   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:03 GMT
	I0731 23:33:03.862196   12704 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"427","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0731 23:33:03.864401   12704 system_pods.go:59] 8 kube-system pods found
	I0731 23:33:03.864493   12704 system_pods.go:61] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "etcd-multinode-411400" [d1476f05-7d77-424f-b5b3-c4c29f539af6] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "kube-apiserver-multinode-411400" [fd9ca41e-c7ca-416e-b00e-b6cf76e4c434] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:33:03.864493   12704 system_pods.go:61] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running
	I0731 23:33:03.864493   12704 system_pods.go:74] duration metric: took 153.0512ms to wait for pod list to return data ...
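The "Waited for ... due to client-side throttling, not priority and fairness" lines scattered through this phase come from the client's own QPS rate limiter, not from server-side flow control. The sketch below shows that token-bucket behaviour using client-go's flowcontrol package; the QPS and burst values are illustrative (they happen to match client-go's historical defaults), not values read from this run.

// Illustrative sketch: client-side request throttling with a token-bucket
// rate limiter, the mechanism behind the "Waited for ... due to client-side
// throttling" messages above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// 5 requests/second with a burst of 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("request %d waited %v before being sent\n", i, wait)
		}
	}
}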
	I0731 23:33:03.864493   12704 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:33:04.056922   12704 request.go:629] Waited for 192.145ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:33:04.056922   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:33:04.057220   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:04.057220   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:04.057220   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:04.060683   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:33:04.060683   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:04.060683   12704 round_trippers.go:580]     Content-Length: 261
	I0731 23:33:04.060683   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:04 GMT
	I0731 23:33:04.060683   12704 round_trippers.go:580]     Audit-Id: 6c3002df-8633-4ab5-9786-61de17471912
	I0731 23:33:04.060798   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:04.060798   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:04.060798   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:04.060798   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:04.060846   12704 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"16d02427-a81b-4fff-a90d-597cdeb70239","resourceVersion":"315","creationTimestamp":"2024-07-31T23:32:40Z"}}]}
	I0731 23:33:04.060957   12704 default_sa.go:45] found service account: "default"
	I0731 23:33:04.060957   12704 default_sa.go:55] duration metric: took 196.3459ms for default service account to be created ...
	I0731 23:33:04.060957   12704 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:33:04.245724   12704 request.go:629] Waited for 184.5292ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:33:04.245912   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:33:04.245912   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:04.245912   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:04.245912   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:04.254761   12704 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:33:04.254761   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:04.254761   12704 round_trippers.go:580]     Audit-Id: f41bbd80-44fc-48a7-bce2-cf0e7cb0b01d
	I0731 23:33:04.254761   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:04.254761   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:04.254761   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:04.254761   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:04.254761   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:04 GMT
	I0731 23:33:04.255513   12704 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"427","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0731 23:33:04.259066   12704 system_pods.go:86] 8 kube-system pods found
	I0731 23:33:04.259066   12704 system_pods.go:89] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:33:04.259066   12704 system_pods.go:89] "etcd-multinode-411400" [d1476f05-7d77-424f-b5b3-c4c29f539af6] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "kube-apiserver-multinode-411400" [fd9ca41e-c7ca-416e-b00e-b6cf76e4c434] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:33:04.259144   12704 system_pods.go:89] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running
	I0731 23:33:04.259197   12704 system_pods.go:126] duration metric: took 198.2368ms to wait for k8s-apps to be running ...
	I0731 23:33:04.259197   12704 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:33:04.271020   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:33:04.296005   12704 system_svc.go:56] duration metric: took 36.8083ms WaitForService to wait for kubelet
	I0731 23:33:04.296053   12704 kubeadm.go:582] duration metric: took 23.4165753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:33:04.296150   12704 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:33:04.447852   12704 request.go:629] Waited for 151.6604ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes
	I0731 23:33:04.447852   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes
	I0731 23:33:04.447852   12704 round_trippers.go:469] Request Headers:
	I0731 23:33:04.447852   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:33:04.447852   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:33:04.454586   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:33:04.454586   12704 round_trippers.go:577] Response Headers:
	I0731 23:33:04.454586   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:33:04.454586   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:33:04 GMT
	I0731 23:33:04.454586   12704 round_trippers.go:580]     Audit-Id: 0225e45c-1a39-4bbc-821b-ab36b2514e7e
	I0731 23:33:04.454586   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:33:04.454586   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:33:04.454586   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:33:04.455308   12704 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0731 23:33:04.455462   12704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:33:04.455462   12704 node_conditions.go:123] node cpu capacity is 2
	I0731 23:33:04.455462   12704 node_conditions.go:105] duration metric: took 159.3102ms to run NodePressure ...
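The NodePressure step reads the node's ephemeral-storage and CPU capacity out of the Node objects returned by the nodes list. The following is a hedged sketch of pulling those two fields with client-go; it reads Status.Capacity, and the kubeconfig path and field choice are assumptions for the example rather than minikube's exact logic.

// Illustrative sketch: list nodes and print ephemeral-storage and cpu
// capacity, matching the "node storage ephemeral capacity" / "node cpu
// capacity" lines in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}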
	I0731 23:33:04.455462   12704 start.go:241] waiting for startup goroutines ...
	I0731 23:33:04.455462   12704 start.go:246] waiting for cluster config update ...
	I0731 23:33:04.455462   12704 start.go:255] writing updated cluster config ...
	I0731 23:33:04.460913   12704 out.go:177] 
	I0731 23:33:04.464026   12704 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:33:04.471900   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:33:04.473151   12704 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:33:04.479438   12704 out.go:177] * Starting "multinode-411400-m02" worker node in "multinode-411400" cluster
	I0731 23:33:04.481689   12704 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:33:04.482365   12704 cache.go:56] Caching tarball of preloaded images
	I0731 23:33:04.482563   12704 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:33:04.482563   12704 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:33:04.482563   12704 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:33:04.486463   12704 start.go:360] acquireMachinesLock for multinode-411400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:33:04.486643   12704 start.go:364] duration metric: took 95.6µs to acquireMachinesLock for "multinode-411400-m02"
	I0731 23:33:04.486915   12704 start.go:93] Provisioning new machine with config: &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:33:04.487145   12704 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0731 23:33:04.490758   12704 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 23:33:04.491808   12704 start.go:159] libmachine.API.Create for "multinode-411400" (driver="hyperv")
	I0731 23:33:04.491808   12704 client.go:168] LocalClient.Create starting
	I0731 23:33:04.491808   12704 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0731 23:33:04.492509   12704 main.go:141] libmachine: Decoding PEM data...
	I0731 23:33:04.492580   12704 main.go:141] libmachine: Parsing certificate...
	I0731 23:33:04.492888   12704 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0731 23:33:04.493138   12704 main.go:141] libmachine: Decoding PEM data...
	I0731 23:33:04.493138   12704 main.go:141] libmachine: Parsing certificate...
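LocalClient.Create begins by reading the driver's CA and client certificates, decoding the PEM data, and parsing the x509 structures, as the three log lines above for each file show. A self-contained sketch of that decode/parse step follows; the file path is copied from the log and error handling is simplified.

// Illustrative sketch: read a PEM-encoded certificate, decode it, and parse
// the x509 structure, mirroring the "Decoding PEM data... / Parsing
// certificate..." steps in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem`)
	if err != nil {
		panic(err)
	}

	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		panic("no PEM block found")
	}

	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}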
	I0731 23:33:04.493138   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0731 23:33:06.346996   12704 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0731 23:33:06.346996   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:06.347876   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0731 23:33:08.031407   12704 main.go:141] libmachine: [stdout =====>] : False
	
	I0731 23:33:08.031456   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:08.031456   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 23:33:09.482784   12704 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 23:33:09.482784   12704 main.go:141] libmachine: [stderr =====>] : 
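Each of the [executing ==>] / [stdout =====>] / [stderr =====>] triplets above is a single non-interactive powershell.exe invocation whose two output streams are captured separately. A minimal Go sketch of that call pattern (the helper name runPowerShell is illustrative, not the libmachine function):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runPowerShell executes one script non-interactively and returns stdout and
    // stderr separately, mirroring the "[executing ==>]" / "[stdout =====>]" /
    // "[stderr =====>]" lines in the log above.
    func runPowerShell(script string) (string, string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script)
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        return stdout.String(), stderr.String(), err
    }

    func main() {
        out, errOut, err := runPowerShell(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        fmt.Printf("stdout: %s\nstderr: %s\nerr: %v\n", out, errOut, err)
    }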
	I0731 23:33:09.483420   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 23:33:13.133942   12704 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 23:33:13.134515   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:13.136801   12704 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 23:33:13.628100   12704 main.go:141] libmachine: Creating SSH key...
	I0731 23:33:14.107018   12704 main.go:141] libmachine: Creating VM...
	I0731 23:33:14.107018   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0731 23:33:16.973173   12704 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0731 23:33:16.973173   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:16.973713   12704 main.go:141] libmachine: Using switch "Default Switch"
	I0731 23:33:16.973842   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0731 23:33:18.729614   12704 main.go:141] libmachine: [stdout =====>] : True
	
	I0731 23:33:18.730011   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:18.730011   12704 main.go:141] libmachine: Creating VHD
	I0731 23:33:18.730011   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0731 23:33:22.431615   12704 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FCA4D3A0-C99B-4E70-9D11-A8A2D617634E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0731 23:33:22.432417   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:22.432417   12704 main.go:141] libmachine: Writing magic tar header
	I0731 23:33:22.432417   12704 main.go:141] libmachine: Writing SSH key tar header
	I0731 23:33:22.442337   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0731 23:33:25.585302   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:25.586343   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:25.586392   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\disk.vhd' -SizeBytes 20000MB
	I0731 23:33:28.140079   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:28.140079   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:28.140180   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-411400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0731 23:33:31.708012   12704 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-411400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0731 23:33:31.708012   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:31.708123   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-411400-m02 -DynamicMemoryEnabled $false
	I0731 23:33:33.950038   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:33.950124   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:33.950124   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-411400-m02 -Count 2
	I0731 23:33:36.101456   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:36.101640   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:36.101774   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-411400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\boot2docker.iso'
	I0731 23:33:38.641438   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:38.641438   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:38.641617   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-411400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\disk.vhd'
	I0731 23:33:41.231388   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:41.231388   12704 main.go:141] libmachine: [stderr =====>] : 
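Taken together, the commands above build the m02 machine: a small fixed VHD is created and seeded (the "magic tar header" and SSH key), converted to a dynamic disk and resized, and the VM is then assembled with New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive and Add-VMHardDiskDrive. A sketch that replays the same sequence; the command strings are copied from the log, while the driver loop itself is only illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02`
        steps := []string{
            `Hyper-V\New-VHD -Path '` + base + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
            // the SSH key is packed into the fixed VHD as a tar stream before conversion
            `Hyper-V\Convert-VHD -Path '` + base + `\fixed.vhd' -DestinationPath '` + base + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
            `Hyper-V\Resize-VHD -Path '` + base + `\disk.vhd' -SizeBytes 20000MB`,
            `Hyper-V\New-VM multinode-411400-m02 -Path '` + base + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
            `Hyper-V\Set-VMMemory -VMName multinode-411400-m02 -DynamicMemoryEnabled $false`,
            `Hyper-V\Set-VMProcessor multinode-411400-m02 -Count 2`,
            `Hyper-V\Set-VMDvdDrive -VMName multinode-411400-m02 -Path '` + base + `\boot2docker.iso'`,
            `Hyper-V\Add-VMHardDiskDrive -VMName multinode-411400-m02 -Path '` + base + `\disk.vhd'`,
        }
        for _, s := range steps {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", s).CombinedOutput()
            fmt.Printf("%s\n%s\n", s, out)
            if err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
    }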
	I0731 23:33:41.231714   12704 main.go:141] libmachine: Starting VM...
	I0731 23:33:41.231714   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400-m02
	I0731 23:33:44.326112   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:44.326112   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:44.326112   12704 main.go:141] libmachine: Waiting for host to start...
	I0731 23:33:44.326112   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:33:46.592195   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:33:46.592265   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:46.592286   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:33:49.078241   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:49.078340   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:50.084041   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:33:52.293372   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:33:52.293434   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:52.293434   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:33:54.819230   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:33:54.819230   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:55.825505   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:33:58.015886   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:33:58.015957   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:33:58.015957   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:00.508899   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:34:00.508899   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:01.515984   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:03.694612   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:03.694693   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:03.694693   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:06.167968   12704 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:34:06.167968   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:07.174993   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:09.392575   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:09.392575   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:09.392575   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:11.970766   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:11.970766   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:11.970944   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:14.136616   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:14.136616   12704 main.go:141] libmachine: [stderr =====>] : 
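After Start-VM the driver alternates between querying the VM state and the first IP address of the first network adapter, sleeping roughly a second between attempts until DHCP hands out an address (the empty stdout lines above are the polls that returned nothing). A sketch of that wait loop; the timeout is chosen here only for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func query(script string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out))
    }

    // Poll until the guest reports an IPv4 address, as in the
    // "Waiting for host to start..." section of the log.
    func main() {
        const vm = "multinode-411400-m02"
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            state := query(`( Hyper-V\Get-VM ` + vm + ` ).state`)
            ip := query(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
            fmt.Printf("state=%s ip=%q\n", state, ip)
            if state == "Running" && ip != "" {
                return
            }
            time.Sleep(1 * time.Second)
        }
        fmt.Println("timed out waiting for an IP address")
    }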
	I0731 23:34:14.136616   12704 machine.go:94] provisionDockerMachine start ...
	I0731 23:34:14.136956   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:16.257780   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:16.257780   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:16.258062   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:18.807601   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:18.807601   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:18.812748   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:18.823475   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:18.823475   12704 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:34:18.939406   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:34:18.939515   12704 buildroot.go:166] provisioning hostname "multinode-411400-m02"
	I0731 23:34:18.939515   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:21.023668   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:21.023668   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:21.024538   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:23.557203   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:23.557203   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:23.562683   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:23.563247   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:23.563247   12704 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400-m02 && echo "multinode-411400-m02" | sudo tee /etc/hostname
	I0731 23:34:23.703984   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400-m02
	
	I0731 23:34:23.704028   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:25.820316   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:25.820316   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:25.820463   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:28.352748   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:28.352748   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:28.357788   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:28.358660   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:28.358758   12704 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:34:28.495039   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
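The hostname and /etc/hosts commands above are run through the "native" SSH client using the generated per-machine id_rsa key, port 22 and the docker user shown in the log. A sketch of issuing one such command with golang.org/x/crypto/ssh; skipping host-key verification here is purely for illustration:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the provisioner trusts the VM it just created
        }
        client, err := ssh.Dial("tcp", "172.17.28.42:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        fmt.Printf("output: %s err: %v\n", out, err)
    }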
	I0731 23:34:28.495131   12704 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:34:28.495131   12704 buildroot.go:174] setting up certificates
	I0731 23:34:28.495131   12704 provision.go:84] configureAuth start
	I0731 23:34:28.495221   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:30.592098   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:30.592326   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:30.592326   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:33.088477   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:33.088477   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:33.089181   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:35.193608   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:35.193608   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:35.194577   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:37.685855   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:37.686319   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:37.686319   12704 provision.go:143] copyHostCerts
	I0731 23:34:37.686384   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:34:37.686384   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:34:37.686384   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:34:37.687134   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:34:37.688269   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:34:37.688618   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:34:37.688687   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:34:37.689027   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:34:37.689885   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:34:37.689885   12704 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:34:37.689885   12704 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:34:37.690588   12704 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:34:37.691859   12704 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400-m02 san=[127.0.0.1 172.17.28.42 localhost minikube multinode-411400-m02]
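provision.go issues a server certificate signed by the local CA with the organization and SAN list printed above (127.0.0.1, 172.17.28.42, localhost, minikube, multinode-411400-m02). A rough sketch of producing such a certificate with crypto/x509; the file names, key formats and the minimal error handling are illustrative, and this is not the minikube implementation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load an existing CA pair (errors ignored for brevity in this sketch).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caKeyBlock, _ := pem.Decode(caKeyPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        caKey, _ := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)

        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-411400-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-411400-m02"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.28.42")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }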
	I0731 23:34:37.872950   12704 provision.go:177] copyRemoteCerts
	I0731 23:34:37.884626   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:34:37.884722   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:39.992239   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:39.992239   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:39.992317   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:42.476063   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:42.476154   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:42.476692   12704 sshutil.go:53] new ssh client: &{IP:172.17.28.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:34:42.573735   12704 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6889544s)
	I0731 23:34:42.573735   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:34:42.573735   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:34:42.615388   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:34:42.616434   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0731 23:34:42.658942   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:34:42.659309   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:34:42.704068   12704 provision.go:87] duration metric: took 14.2087575s to configureAuth
	I0731 23:34:42.704143   12704 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:34:42.704766   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:34:42.704907   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:44.816440   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:44.816773   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:44.816852   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:47.291415   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:47.291415   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:47.296720   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:47.297664   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:47.297664   12704 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:34:47.429277   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:34:47.429353   12704 buildroot.go:70] root file system type: tmpfs
	I0731 23:34:47.429476   12704 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:34:47.429558   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:49.567300   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:49.567521   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:49.567521   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:52.089130   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:52.089190   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:52.094295   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:52.094807   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:52.094807   12704 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.20.56"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:34:52.235732   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.20.56
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:34:52.235867   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:34:54.315635   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:34:54.315911   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:54.315911   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:34:56.805955   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:34:56.806378   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:34:56.811909   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:34:56.812752   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:34:56.812752   12704 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:34:59.069964   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:34:59.069964   12704 machine.go:97] duration metric: took 44.932782s to provisionDockerMachine
	I0731 23:34:59.070077   12704 client.go:171] duration metric: took 1m54.5768258s to LocalClient.Create
	I0731 23:34:59.070077   12704 start.go:167] duration metric: took 1m54.5768258s to libmachine.API.Create "multinode-411400"
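The docker.service unit shipped above is rendered per node: the NO_PROXY environment, the provider label and the insecure-registry CIDR are substituted before the file is written to docker.service.new and swapped in only when it differs from the installed unit. A trimmed sketch of rendering such a unit with text/template; the field names are illustrative:

    package main

    import (
        "os"
        "text/template"
    )

    // Template trimmed to the parts of the unit above that vary per node.
    const unitTmpl = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    Environment="NO_PROXY={{.NoProxy}}"
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        t.Execute(os.Stdout, struct {
            NoProxy, Provider, ServiceCIDR string
        }{"172.17.20.56", "hyperv", "10.96.0.0/12"})
    }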
	I0731 23:34:59.070137   12704 start.go:293] postStartSetup for "multinode-411400-m02" (driver="hyperv")
	I0731 23:34:59.070137   12704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:34:59.083201   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:34:59.083201   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:01.209647   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:01.209647   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:01.210086   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:03.714690   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:03.714690   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:03.715740   12704 sshutil.go:53] new ssh client: &{IP:172.17.28.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:35:03.822405   12704 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7391442s)
	I0731 23:35:03.834281   12704 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:35:03.840189   12704 command_runner.go:130] > NAME=Buildroot
	I0731 23:35:03.840189   12704 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:35:03.840189   12704 command_runner.go:130] > ID=buildroot
	I0731 23:35:03.840189   12704 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:35:03.840189   12704 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:35:03.840189   12704 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:35:03.840189   12704 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:35:03.840189   12704 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:35:03.841881   12704 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:35:03.841881   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:35:03.853904   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:35:03.870505   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:35:03.917451   12704 start.go:296] duration metric: took 4.8472528s for postStartSetup
	I0731 23:35:03.920316   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:06.033755   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:06.034424   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:06.034520   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:08.544294   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:08.545199   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:08.545564   12704 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:35:08.548879   12704 start.go:128] duration metric: took 2m4.0601708s to createHost
	I0731 23:35:08.548970   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:10.647605   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:10.647605   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:10.647730   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:13.150924   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:13.150924   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:13.156491   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:35:13.157182   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:35:13.157737   12704 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:35:13.286976   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468913.307587751
	
	I0731 23:35:13.287059   12704 fix.go:216] guest clock: 1722468913.307587751
	I0731 23:35:13.287059   12704 fix.go:229] Guest: 2024-07-31 23:35:13.307587751 +0000 UTC Remote: 2024-07-31 23:35:08.5489709 +0000 UTC m=+343.955175301 (delta=4.758616851s)
	I0731 23:35:13.287157   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:15.424012   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:15.424012   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:15.424284   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:17.958068   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:17.958068   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:17.964133   12704 main.go:141] libmachine: Using SSH client type: native
	I0731 23:35:17.964291   12704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.28.42 22 <nil> <nil>}
	I0731 23:35:17.964291   12704 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722468913
	I0731 23:35:18.107788   12704 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:35:13 UTC 2024
	
	I0731 23:35:18.107788   12704 fix.go:236] clock set: Wed Jul 31 23:35:13 UTC 2024
	 (err=<nil>)
	I0731 23:35:18.107788   12704 start.go:83] releasing machines lock for "multinode-411400-m02", held for 2m13.6194041s
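fix.go parses the guest's `date +%s.%N` output, compares it with the host-side reference time and, when the skew is large enough (about 4.76s here), resets the guest clock over SSH with `sudo date -s @<seconds>`. A small sketch of that comparison; the 2-second threshold is illustrative:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestRaw := "1722468913.307587751" // stdout of `date +%s.%N` on the guest
        parts := strings.SplitN(guestRaw, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
        if math.Abs(delta.Seconds()) > 2 {
            // In the log this is issued over SSH to the guest.
            fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
        }
    }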
	I0731 23:35:18.107788   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:20.216190   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:20.216190   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:20.216429   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:22.765791   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:22.765791   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:22.770466   12704 out.go:177] * Found network options:
	I0731 23:35:22.776436   12704 out.go:177]   - NO_PROXY=172.17.20.56
	W0731 23:35:22.778531   12704 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:35:22.781248   12704 out.go:177]   - NO_PROXY=172.17.20.56
	W0731 23:35:22.784293   12704 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 23:35:22.785673   12704 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:35:22.788635   12704 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:35:22.788635   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:22.798025   12704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:35:22.798025   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:35:24.984558   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:24.985553   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:24.985553   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:24.985553   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:24.985553   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:24.985783   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:27.655596   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:27.655596   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:27.656002   12704 sshutil.go:53] new ssh client: &{IP:172.17.28.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:35:27.677763   12704 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:35:27.677939   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:27.678326   12704 sshutil.go:53] new ssh client: &{IP:172.17.28.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:35:27.750323   12704 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0731 23:35:27.750810   12704 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9527221s)
	W0731 23:35:27.750810   12704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:35:27.761012   12704 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:35:27.762261   12704 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9735634s)
	W0731 23:35:27.762261   12704 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 23:35:27.762477   12704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:35:27.791707   12704 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:35:27.792397   12704 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:35:27.792397   12704 start.go:495] detecting cgroup driver to use...
	I0731 23:35:27.792716   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:35:27.826959   12704 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:35:27.840400   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0731 23:35:27.847773   12704 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:35:27.847773   12704 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:35:27.874453   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:35:27.894870   12704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:35:27.905637   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:35:27.937021   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:35:27.966121   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:35:27.996129   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:35:28.028464   12704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:35:28.062269   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:35:28.095952   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:35:28.126648   12704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:35:28.156865   12704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:35:28.173246   12704 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:35:28.185787   12704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:35:28.214841   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:28.406262   12704 ssh_runner.go:195] Run: sudo systemctl restart containerd
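The sed invocations above edit /etc/containerd/config.toml in place (cgroupfs as the cgroup driver, the pause sandbox image, runc v2, the CNI conf_dir) before containerd is restarted. A sketch of the same kind of in-place rewrite for one of those keys, done in Go instead of sed:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Equivalent of `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
    // on /etc/containerd/config.toml; error handling is minimal.
    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, updated, 0644); err != nil {
            fmt.Println("write:", err)
        }
    }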
	I0731 23:35:28.438551   12704 start.go:495] detecting cgroup driver to use...
	I0731 23:35:28.449977   12704 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:35:28.472906   12704 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:35:28.472906   12704 command_runner.go:130] > [Unit]
	I0731 23:35:28.472906   12704 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:35:28.472906   12704 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:35:28.472906   12704 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:35:28.472906   12704 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:35:28.472906   12704 command_runner.go:130] > StartLimitBurst=3
	I0731 23:35:28.472906   12704 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:35:28.472906   12704 command_runner.go:130] > [Service]
	I0731 23:35:28.472906   12704 command_runner.go:130] > Type=notify
	I0731 23:35:28.472906   12704 command_runner.go:130] > Restart=on-failure
	I0731 23:35:28.472906   12704 command_runner.go:130] > Environment=NO_PROXY=172.17.20.56
	I0731 23:35:28.472906   12704 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:35:28.472906   12704 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:35:28.472906   12704 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:35:28.472906   12704 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:35:28.472906   12704 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:35:28.472906   12704 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:35:28.472906   12704 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:35:28.472906   12704 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:35:28.472906   12704 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:35:28.472906   12704 command_runner.go:130] > ExecStart=
	I0731 23:35:28.472906   12704 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:35:28.472906   12704 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:35:28.472906   12704 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:35:28.472906   12704 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:35:28.472906   12704 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:35:28.472906   12704 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:35:28.472906   12704 command_runner.go:130] > LimitCORE=infinity
	I0731 23:35:28.472906   12704 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:35:28.472906   12704 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:35:28.473537   12704 command_runner.go:130] > TasksMax=infinity
	I0731 23:35:28.473537   12704 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:35:28.473537   12704 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:35:28.473537   12704 command_runner.go:130] > Delegate=yes
	I0731 23:35:28.473537   12704 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:35:28.473537   12704 command_runner.go:130] > KillMode=process
	I0731 23:35:28.473537   12704 command_runner.go:130] > [Install]
	I0731 23:35:28.473537   12704 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:35:28.486545   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:35:28.521984   12704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:35:28.565789   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:35:28.598165   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:35:28.633860   12704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:35:28.696525   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:35:28.720841   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:35:28.753282   12704 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0731 23:35:28.766773   12704 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:35:28.772188   12704 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:35:28.783545   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:35:28.800302   12704 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:35:28.840993   12704 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:35:29.047036   12704 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:35:29.239199   12704 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:35:29.239290   12704 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:35:29.284076   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:29.472891   12704 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:35:32.025663   12704 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5526648s)
	I0731 23:35:32.038118   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:35:32.073813   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:35:32.110417   12704 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:35:32.296551   12704 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:35:32.490849   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:32.681703   12704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:35:32.724555   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:35:32.757124   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:32.945334   12704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:35:33.051348   12704 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:35:33.063289   12704 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:35:33.072137   12704 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:35:33.072137   12704 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:35:33.072137   12704 command_runner.go:130] > Device: 0,22	Inode: 878         Links: 1
	I0731 23:35:33.072137   12704 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:35:33.072137   12704 command_runner.go:130] > Access: 2024-07-31 23:35:32.990856516 +0000
	I0731 23:35:33.072137   12704 command_runner.go:130] > Modify: 2024-07-31 23:35:32.990856516 +0000
	I0731 23:35:33.072137   12704 command_runner.go:130] > Change: 2024-07-31 23:35:32.994856544 +0000
	I0731 23:35:33.072388   12704 command_runner.go:130] >  Birth: -
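start.go waits up to 60s for /var/run/cri-dockerd.sock to appear, checking it with stat as above. A sketch of that wait; the poll interval is illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // Wait for a socket path to exist, as in
    // "Will wait 60s for socket path /var/run/cri-dockerd.sock" above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is illustrative
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("cri-dockerd socket is ready")
    }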
	I0731 23:35:33.072388   12704 start.go:563] Will wait 60s for crictl version
	I0731 23:35:33.086564   12704 ssh_runner.go:195] Run: which crictl
	I0731 23:35:33.092691   12704 command_runner.go:130] > /usr/bin/crictl
	I0731 23:35:33.103330   12704 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:35:33.158976   12704 command_runner.go:130] > Version:  0.1.0
	I0731 23:35:33.158976   12704 command_runner.go:130] > RuntimeName:  docker
	I0731 23:35:33.158976   12704 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:35:33.160065   12704 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:35:33.160065   12704 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:35:33.169231   12704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:35:33.206389   12704 command_runner.go:130] > 27.1.1
	I0731 23:35:33.216683   12704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:35:33.247143   12704 command_runner.go:130] > 27.1.1
	I0731 23:35:33.251480   12704 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:35:33.254173   12704 out.go:177]   - env NO_PROXY=172.17.20.56
	I0731 23:35:33.256539   12704 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:35:33.259968   12704 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:35:33.259968   12704 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:35:33.259968   12704 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:35:33.259968   12704 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:35:33.262320   12704 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:35:33.262320   12704 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:35:33.274026   12704 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:35:33.279622   12704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
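
The bash one-liner above strips any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP. The same edit expressed directly in Go, for illustration only (minikube runs the shell form over SSH):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites an /etc/hosts-style file so that exactly one
    // line maps name to ip, keeping every unrelated line intact.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale mapping
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// A scratch copy is used here rather than the real /etc/hosts.
    	if err := upsertHostsEntry("hosts.test", "172.17.16.1", "host.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }
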
	I0731 23:35:33.299455   12704 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:35:33.300334   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:35:33.300652   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:35:35.380968   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:35.380968   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:35.381708   12704 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:35:35.382339   12704 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.28.42
	I0731 23:35:35.382339   12704 certs.go:194] generating shared ca certs ...
	I0731 23:35:35.382434   12704 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:35:35.383071   12704 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:35:35.383435   12704 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:35:35.383750   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:35:35.383975   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:35:35.384125   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:35:35.384297   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:35:35.384422   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:35:35.384971   12704 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:35:35.385171   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:35:35.385202   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:35:35.385202   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:35:35.385919   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:35:35.386491   12704 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:35:35.386842   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:35:35.387043   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:35:35.387148   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:35:35.387640   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:35:35.435334   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:35:35.481277   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:35:35.527812   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:35:35.575263   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:35:35.618960   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:35:35.662722   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:35:35.721657   12704 ssh_runner.go:195] Run: openssl version
	I0731 23:35:35.730245   12704 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:35:35.742250   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:35:35.771959   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:35:35.779622   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:35:35.779890   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:35:35.790903   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:35:35.797884   12704 command_runner.go:130] > 51391683
	I0731 23:35:35.810380   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 23:35:35.842791   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:35:35.872852   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:35:35.879207   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:35:35.879357   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:35:35.890586   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:35:35.899174   12704 command_runner.go:130] > 3ec20f2e
	I0731 23:35:35.919383   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:35:35.951983   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:35:35.982160   12704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:35:35.989379   12704 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:35:35.989494   12704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:35:36.000490   12704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:35:36.007987   12704 command_runner.go:130] > b5213941
	I0731 23:35:36.018751   12704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
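
Each of the ln -fs commands above publishes a CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs. A small sketch that performs the same link-or-replace step, with the hash values taken from the log and the hash computation itself left to openssl x509 -hash:

    package main

    import (
    	"log"
    	"os"
    )

    // ensureHashLink makes <certDir>/<hash>.0 point at certPath, replacing any
    // stale link, mirroring the `test -L ... || ln -fs ...` commands above.
    func ensureHashLink(certDir, hash, certPath string) error {
    	link := certDir + "/" + hash + ".0"
    	if target, err := os.Readlink(link); err == nil && target == certPath {
    		return nil // already correct
    	}
    	_ = os.Remove(link) // ignore "does not exist"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Values copied from the log; point certDir at a scratch directory when testing.
    	if err := ensureHashLink("certs", "b5213941", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
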
	I0731 23:35:36.048997   12704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:35:36.056080   12704 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:35:36.056774   12704 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:35:36.057174   12704 kubeadm.go:934] updating node {m02 172.17.28.42 8443 v1.30.3 docker false true} ...
	I0731 23:35:36.057330   12704 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.28.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
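
The kubelet drop-in shown above is rendered per node, with only the hostname override and node IP changing between m02 and the primary. A rough text/template sketch of producing it (the struct and field names are illustrative, not minikube's actual template code):

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // unitTmpl approximates the kubelet drop-in from the log; only the
    // hostname override and node IP vary between nodes.
    const unitTmpl = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type nodeParams struct {
    	KubernetesVersion string
    	NodeName          string
    	NodeIP            string
    }

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	p := nodeParams{KubernetesVersion: "v1.30.3", NodeName: "multinode-411400-m02", NodeIP: "172.17.28.42"}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		log.Fatal(err)
    	}
    }
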
	I0731 23:35:36.070010   12704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:35:36.096101   12704 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0731 23:35:36.097222   12704 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 23:35:36.109201   12704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 23:35:36.127691   12704 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 23:35:36.127691   12704 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 23:35:36.127747   12704 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 23:35:36.127898   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 23:35:36.127947   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 23:35:36.142721   12704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 23:35:36.143804   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:35:36.144810   12704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 23:35:36.148704   12704 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 23:35:36.149726   12704 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 23:35:36.149857   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 23:35:36.183062   12704 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 23:35:36.184341   12704 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 23:35:36.197580   12704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 23:35:36.198880   12704 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 23:35:36.198880   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 23:35:36.248816   12704 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 23:35:36.249240   12704 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 23:35:36.249352   12704 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
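
The kubectl, kubeadm and kubelet transfers above are sourced from dl.k8s.io, with a companion .sha256 file serving as the checksum. A compact sketch of that download-and-verify step for one binary (URL copied from the log, error handling kept minimal):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url into path and returns the hex SHA-256 of what was written.
    func fetch(url, path string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	out, err := os.Create(path)
    	if err != nil {
    		return "", err
    	}
    	defer out.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
    	got, err := fetch(base, "kubectl")
    	if err != nil {
    		log.Fatal(err)
    	}
    	resp, err := http.Get(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	want, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if got != strings.TrimSpace(string(want)) {
    		log.Fatalf("checksum mismatch: got %s", got)
    	}
    	fmt.Println("kubectl verified")
    }
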
	I0731 23:35:37.596482   12704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0731 23:35:37.613850   12704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0731 23:35:37.643597   12704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:35:37.688503   12704 ssh_runner.go:195] Run: grep 172.17.20.56	control-plane.minikube.internal$ /etc/hosts
	I0731 23:35:37.694695   12704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.20.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:35:37.728864   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:37.930050   12704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:35:37.959904   12704 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:35:37.960812   12704 start.go:317] joinCluster: &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:35:37.961032   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 23:35:37.961032   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:35:40.062193   12704 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:35:40.062193   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:40.062193   12704 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:35:42.573392   12704 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:35:42.574102   12704 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:35:42.574477   12704 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:35:42.771626   12704 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token b7qcz0.snxlif30946hyunx --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 23:35:42.771716   12704 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8106243s)
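
The --discovery-token-ca-cert-hash in the join command printed above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. A short sketch for recomputing it from ca.crt, useful when cross-checking a join command like this one (the file path is a placeholder; on the node it is /var/lib/minikube/certs/ca.crt):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm's discovery hash is SHA-256 over the DER SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
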
	I0731 23:35:42.771716   12704 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:35:42.771716   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b7qcz0.snxlif30946hyunx --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02"
	I0731 23:35:42.982155   12704 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:35:44.283593   12704 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:35:44.283593   12704 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0731 23:35:44.283593   12704 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0731 23:35:44.283593   12704 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:35:44.283593   12704 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:35:44.283720   12704 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:35:44.283720   12704 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:35:44.283794   12704 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.094768ms
	I0731 23:35:44.283794   12704 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0731 23:35:44.283794   12704 command_runner.go:130] > This node has joined the cluster:
	I0731 23:35:44.283794   12704 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0731 23:35:44.283833   12704 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0731 23:35:44.283833   12704 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0731 23:35:44.283833   12704 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b7qcz0.snxlif30946hyunx --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02": (1.5120981s)
	I0731 23:35:44.283929   12704 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 23:35:44.484016   12704 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0731 23:35:44.675379   12704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-411400-m02 minikube.k8s.io/updated_at=2024_07_31T23_35_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=multinode-411400 minikube.k8s.io/primary=false
	I0731 23:35:44.825593   12704 command_runner.go:130] > node/multinode-411400-m02 labeled
	I0731 23:35:44.829787   12704 start.go:319] duration metric: took 6.8688888s to joinCluster
	I0731 23:35:44.830117   12704 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:35:44.830834   12704 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:35:44.832975   12704 out.go:177] * Verifying Kubernetes components...
	I0731 23:35:44.848317   12704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:35:45.090717   12704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:35:45.120243   12704 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:35:45.120985   12704 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.20.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:35:45.121666   12704 node_ready.go:35] waiting up to 6m0s for node "multinode-411400-m02" to be "Ready" ...
	I0731 23:35:45.122360   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:45.122360   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:45.122360   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:45.122360   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:45.134622   12704 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 23:35:45.134622   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:45.134622   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:45 GMT
	I0731 23:35:45.134622   12704 round_trippers.go:580]     Audit-Id: 60a58ad0-0b1a-4fe1-939c-07b444760c53
	I0731 23:35:45.134622   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:45.134622   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:45.134622   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:45.134622   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:45.134622   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:45.134622   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:45.629800   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:45.629800   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:45.629800   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:45.629800   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:45.633807   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:45.633807   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:45.633807   12704 round_trippers.go:580]     Audit-Id: c7534fb8-91bf-4b70-a2a2-74e141f3ab77
	I0731 23:35:45.633807   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:45.633807   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:45.633807   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:45.633807   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:45.633807   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:45.633807   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:45 GMT
	I0731 23:35:45.633807   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:46.129338   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:46.129472   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:46.129472   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:46.129472   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:46.134075   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:35:46.134075   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:46.134075   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:46.134075   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:46.134075   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:46.134480   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:46.134480   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:46.134480   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:46 GMT
	I0731 23:35:46.134480   12704 round_trippers.go:580]     Audit-Id: 58b39c23-a9ac-4e53-a0cf-65fb8429a79b
	I0731 23:35:46.134660   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:46.629256   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:46.629256   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:46.629256   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:46.629256   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:46.631898   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:35:46.631898   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:46.631898   12704 round_trippers.go:580]     Audit-Id: b2005e95-e776-426d-9810-dcde63f6c9b4
	I0731 23:35:46.632757   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:46.632757   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:46.632757   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:46.632757   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:46.632757   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:46.632757   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:46 GMT
	I0731 23:35:46.632898   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:47.127945   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:47.128046   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:47.128046   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:47.128328   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:47.132757   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:35:47.132757   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:47.132757   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:47.132757   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:47 GMT
	I0731 23:35:47.132757   12704 round_trippers.go:580]     Audit-Id: 8c81c151-6d1c-43c0-8f4b-77c6ae453ad6
	I0731 23:35:47.132757   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:47.132757   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:47.132757   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:47.133551   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:47.133690   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:47.133690   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
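
The repeated GETs against /api/v1/nodes/multinode-411400-m02 in this stretch of the log are the harness polling for the node's Ready condition, which is still False here. A condensed client-go equivalent of that loop, with the kubeconfig path and timings as placeholders rather than the harness's actual values:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-411400-m02", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for node to become Ready")
    }
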
	I0731 23:35:47.624199   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:47.624403   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:47.624403   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:47.624403   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:47.627870   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:47.628061   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:47.628061   12704 round_trippers.go:580]     Audit-Id: c1e66240-5f31-4717-b31a-4074544ec336
	I0731 23:35:47.628061   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:47.628061   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:47.628061   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:47.628061   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:47.628061   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:47.628061   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:47 GMT
	I0731 23:35:47.628174   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:48.129527   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:48.129743   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:48.129743   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:48.129829   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:48.133000   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:48.133666   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:48.133666   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:48.133666   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:48.133666   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:48.133666   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:48.133666   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:48 GMT
	I0731 23:35:48.133666   12704 round_trippers.go:580]     Audit-Id: 01e01d81-822e-4785-991b-b4ef7639487a
	I0731 23:35:48.133666   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:48.133818   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:48.631971   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:48.631971   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:48.631971   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:48.631971   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:48.635788   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:48.635829   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:48.635829   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:48.635829   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:48.635829   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:48.635829   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:48.635829   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:48.635829   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:48 GMT
	I0731 23:35:48.635829   12704 round_trippers.go:580]     Audit-Id: b825b841-06cb-4e0c-9f85-38ceb290fa96
	I0731 23:35:48.636105   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:49.132922   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:49.132922   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:49.132922   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:49.132922   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:49.136906   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:49.136906   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:49.136906   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:49.136906   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:49.136906   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:49.136906   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:49 GMT
	I0731 23:35:49.137003   12704 round_trippers.go:580]     Audit-Id: a045669e-3a50-4090-9f1f-21221ee7dfb5
	I0731 23:35:49.137003   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:49.137003   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:49.137143   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:49.137601   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:35:49.637143   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:49.637212   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:49.637212   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:49.637266   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:49.639762   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:35:49.639762   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:49.639762   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:49.639762   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:49.639762   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:49.639762   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:49 GMT
	I0731 23:35:49.639762   12704 round_trippers.go:580]     Audit-Id: edee6305-4ed7-44e1-b1fe-b70de446e8b6
	I0731 23:35:49.639762   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:49.639762   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:49.639762   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:50.128242   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:50.128308   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:50.128308   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:50.128308   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:50.139665   12704 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0731 23:35:50.139916   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:50.139916   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:50.139916   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:50.139916   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:50.139916   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:50.139916   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:50 GMT
	I0731 23:35:50.139916   12704 round_trippers.go:580]     Audit-Id: da5d6ffd-a288-420b-b6b4-1b5842f48b96
	I0731 23:35:50.139916   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:50.140127   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:50.632220   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:50.632283   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:50.632359   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:50.632359   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:50.638097   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:35:50.638097   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:50.638097   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:50.638097   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:50.638097   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:50.638097   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:50.638097   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:50.638097   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:50 GMT
	I0731 23:35:50.638097   12704 round_trippers.go:580]     Audit-Id: 1c9cc3d0-041b-43ae-a849-9220ce84e474
	I0731 23:35:50.638097   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:51.136282   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:51.136282   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:51.136282   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:51.136373   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:51.140293   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:51.140293   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:51.140293   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:51.140293   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:51.140293   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:51.140293   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:51.140293   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:51 GMT
	I0731 23:35:51.140293   12704 round_trippers.go:580]     Audit-Id: df01b0b3-b8fa-4be4-a408-de466694ce99
	I0731 23:35:51.140293   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:51.140293   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:51.140293   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:35:51.626797   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:51.626797   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:51.626797   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:51.626797   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:51.630168   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:51.630168   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:51.630168   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:51.630168   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:51 GMT
	I0731 23:35:51.630168   12704 round_trippers.go:580]     Audit-Id: 960ba239-7983-4001-a3e0-4a75c821dc12
	I0731 23:35:51.630168   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:51.630168   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:51.630168   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:51.630168   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:51.630168   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:52.133795   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:52.133857   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:52.133857   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:52.133857   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:52.137668   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:52.137729   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:52.137729   12704 round_trippers.go:580]     Audit-Id: 5443cd99-519a-4a57-90fa-bf20e318a805
	I0731 23:35:52.137729   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:52.137729   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:52.137729   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:52.137729   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:52.137729   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:52.137729   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:52 GMT
	I0731 23:35:52.137832   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:52.621873   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:52.621873   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:52.621873   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:52.621873   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:52.624676   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:35:52.625584   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:52.625584   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:52.625584   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:52.625584   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:52.625584   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:52.625584   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:52.625584   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:52 GMT
	I0731 23:35:52.625584   12704 round_trippers.go:580]     Audit-Id: f7ad6194-5eab-48b7-8d48-7825a29adf00
	I0731 23:35:52.625816   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:53.130565   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:53.130840   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:53.130840   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:53.130840   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:53.135465   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:35:53.135465   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:53.136074   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:53.136074   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:53.136074   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:53 GMT
	I0731 23:35:53.136074   12704 round_trippers.go:580]     Audit-Id: da03108d-869e-42b5-ac6d-7bf0da165dd1
	I0731 23:35:53.136074   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:53.136074   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:53.136074   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:53.136424   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:53.636642   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:53.636642   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:53.636642   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:53.636642   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:53.639269   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:35:53.640299   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:53.640299   12704 round_trippers.go:580]     Audit-Id: 3edeefe0-9565-47f8-ab34-f1db11ff3bfe
	I0731 23:35:53.640299   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:53.640299   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:53.640366   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:53.640366   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:53.640366   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:53.640366   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:53 GMT
	I0731 23:35:53.640547   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:53.641030   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:35:54.136217   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:54.136415   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:54.136415   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:54.136415   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:54.140050   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:54.140050   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:54.140050   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:54 GMT
	I0731 23:35:54.140050   12704 round_trippers.go:580]     Audit-Id: 204d18f2-93a6-416a-9479-fcae7360dc0c
	I0731 23:35:54.140050   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:54.140050   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:54.140050   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:54.140050   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:54.140050   12704 round_trippers.go:580]     Content-Length: 4028
	I0731 23:35:54.140050   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"592","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3004 chars]
	I0731 23:35:54.622717   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:54.622820   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:54.622820   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:54.622820   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:54.646929   12704 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0731 23:35:54.647575   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:54.647575   12704 round_trippers.go:580]     Audit-Id: 32b3686c-72bb-4a03-800f-d21923dc64d8
	I0731 23:35:54.647575   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:54.647575   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:54.647575   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:54.647575   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:54.647661   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:54 GMT
	I0731 23:35:54.647747   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:55.126935   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:55.126935   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:55.126935   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:55.126935   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:55.131559   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:35:55.133843   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:55.133843   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:55.133843   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:55.133843   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:55.133843   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:55.133924   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:55 GMT
	I0731 23:35:55.133924   12704 round_trippers.go:580]     Audit-Id: 85900aa4-e9a9-42e1-ad24-8c03c61256a1
	I0731 23:35:55.134155   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:55.653336   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:55.653336   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:55.653336   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:55.653336   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:55.661342   12704 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:35:55.661342   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:55.661342   12704 round_trippers.go:580]     Audit-Id: d9d32684-6fc0-47a2-80f1-c8da74d4711e
	I0731 23:35:55.661777   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:55.661777   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:55.661777   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:55.661777   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:55.661777   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:55 GMT
	I0731 23:35:55.664502   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:55.664994   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:35:56.123867   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:56.124319   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:56.124319   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:56.124319   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:56.127039   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:35:56.127039   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:56.127039   12704 round_trippers.go:580]     Audit-Id: 17f8a591-d5b9-4029-b89c-f8d635bb85df
	I0731 23:35:56.127039   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:56.127039   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:56.127039   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:56.127039   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:56.127039   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:56 GMT
	I0731 23:35:56.127996   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:56.627885   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:56.627885   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:56.627885   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:56.627885   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:56.631987   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:35:56.632212   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:56.632212   12704 round_trippers.go:580]     Audit-Id: 622e21e8-43ac-43ca-bc22-f239716349fb
	I0731 23:35:56.632212   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:56.632212   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:56.632212   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:56.632212   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:56.632212   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:56 GMT
	I0731 23:35:56.632598   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:57.134130   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:57.134130   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:57.134130   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:57.134130   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:57.137744   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:57.138698   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:57.138698   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:57.138698   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:57 GMT
	I0731 23:35:57.138761   12704 round_trippers.go:580]     Audit-Id: 45b80cd2-c771-463e-86f0-6c7877556b4c
	I0731 23:35:57.138761   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:57.138761   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:57.138828   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:57.139047   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:57.637408   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:57.637553   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:57.637553   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:57.637553   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:57.640864   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:57.641060   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:57.641060   12704 round_trippers.go:580]     Audit-Id: a28938f5-757d-4662-b55a-1091545c6f06
	I0731 23:35:57.641060   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:57.641060   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:57.641060   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:57.641060   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:57.641060   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:57 GMT
	I0731 23:35:57.641276   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:58.124545   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:58.124628   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:58.124628   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:58.124628   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:58.131035   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:35:58.131104   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:58.131104   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:58.131125   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:58.131147   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:58.131171   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:58 GMT
	I0731 23:35:58.131171   12704 round_trippers.go:580]     Audit-Id: 25c80b2f-92c7-459b-8b67-abe670561d61
	I0731 23:35:58.131171   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:58.132391   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:58.132462   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:35:58.628552   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:58.628552   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:58.628552   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:58.628552   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:58.632215   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:58.632278   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:58.632314   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:58.632314   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:58.632314   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:58.632349   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:58 GMT
	I0731 23:35:58.632349   12704 round_trippers.go:580]     Audit-Id: 0ca4d41e-688b-4ebf-aec9-2ce78942f354
	I0731 23:35:58.632349   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:58.632565   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:59.133930   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:59.133930   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:59.133930   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:59.133930   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:59.137564   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:59.137564   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:59.138137   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:59.138137   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:59.138137   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:59.138137   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:59 GMT
	I0731 23:35:59.138137   12704 round_trippers.go:580]     Audit-Id: 5298e157-887b-4173-9e8a-fd7b2d5acd52
	I0731 23:35:59.138137   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:59.140150   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:35:59.636224   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:35:59.636348   12704 round_trippers.go:469] Request Headers:
	I0731 23:35:59.636348   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:35:59.636348   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:35:59.639615   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:35:59.639718   12704 round_trippers.go:577] Response Headers:
	I0731 23:35:59.639718   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:35:59 GMT
	I0731 23:35:59.639718   12704 round_trippers.go:580]     Audit-Id: b4206014-56ff-4e1d-ab2c-c4d6fec80c0a
	I0731 23:35:59.639718   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:35:59.639787   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:35:59.639787   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:35:59.639787   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:35:59.639787   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:00.136684   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:00.136684   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:00.136684   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:00.136684   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:00.142602   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:36:00.142751   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:00.142751   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:00.142751   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:00 GMT
	I0731 23:36:00.142751   12704 round_trippers.go:580]     Audit-Id: d6dc0b7c-4516-4a0e-ae7c-2ddb31b3db7d
	I0731 23:36:00.142751   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:00.142751   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:00.142751   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:00.143122   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:00.143152   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:00.639837   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:00.639875   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:00.639875   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:00.639875   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:00.643290   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:00.643290   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:00.643290   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:00.643290   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:00.643290   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:00.643290   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:00 GMT
	I0731 23:36:00.643290   12704 round_trippers.go:580]     Audit-Id: 646ffb5d-68f9-4f33-bcb2-b03730c9234d
	I0731 23:36:00.643290   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:00.643290   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:01.124524   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:01.124639   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:01.124639   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:01.124639   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:01.130861   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:36:01.130861   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:01.131313   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:01 GMT
	I0731 23:36:01.131313   12704 round_trippers.go:580]     Audit-Id: 4930365a-7885-474e-8ed8-9da7297a342f
	I0731 23:36:01.131313   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:01.131313   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:01.131313   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:01.131313   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:01.131747   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:01.623557   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:01.623557   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:01.623659   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:01.623659   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:01.628562   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:36:01.628672   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:01.628672   12704 round_trippers.go:580]     Audit-Id: 94c0fcf1-4aa2-46d1-af68-102a46d57cce
	I0731 23:36:01.628672   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:01.628672   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:01.628755   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:01.628755   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:01.628755   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:01 GMT
	I0731 23:36:01.628836   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:02.125286   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:02.125286   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:02.125362   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:02.125362   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:02.128695   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:02.128695   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:02.128695   12704 round_trippers.go:580]     Audit-Id: 58068f83-e331-4d80-9b57-6c254cd8e5fc
	I0731 23:36:02.128695   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:02.128695   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:02.128695   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:02.129287   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:02.129287   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:02 GMT
	I0731 23:36:02.129557   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:02.626583   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:02.626745   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:02.626786   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:02.626830   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:02.629478   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:02.629478   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:02.629478   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:02.629478   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:02 GMT
	I0731 23:36:02.629478   12704 round_trippers.go:580]     Audit-Id: d15efc95-5ce5-4f22-b545-533a358067b2
	I0731 23:36:02.629478   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:02.629478   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:02.629478   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:02.630214   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:02.630523   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:03.127087   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:03.127087   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:03.127087   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:03.127203   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:03.130454   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:03.130853   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:03.130853   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:03 GMT
	I0731 23:36:03.130853   12704 round_trippers.go:580]     Audit-Id: fb2c672e-0f70-489d-bc55-a74b73da6083
	I0731 23:36:03.130853   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:03.130853   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:03.131017   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:03.131017   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:03.131255   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:03.625533   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:03.625533   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:03.625533   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:03.625533   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:03.628179   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:03.628179   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:03.628179   12704 round_trippers.go:580]     Audit-Id: dc73a7c0-5a36-4a3c-8b9f-f84d47ee625c
	I0731 23:36:03.629210   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:03.629210   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:03.629210   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:03.629210   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:03.629245   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:03 GMT
	I0731 23:36:03.629504   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:04.129180   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:04.129544   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:04.129544   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:04.129544   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:04.132854   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:04.132854   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:04.132854   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:04.132854   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:04.132854   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:04 GMT
	I0731 23:36:04.132854   12704 round_trippers.go:580]     Audit-Id: dbb234c0-7668-4367-95be-4cd7ee95e7e0
	I0731 23:36:04.133700   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:04.133700   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:04.133747   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:04.628687   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:04.628800   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:04.628800   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:04.628884   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:04.629686   12704 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 23:36:04.629686   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:04.629686   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:04 GMT
	I0731 23:36:04.629686   12704 round_trippers.go:580]     Audit-Id: 422a4d77-2772-4292-8cdd-dbe100683a5b
	I0731 23:36:04.629686   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:04.629686   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:04.629686   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:04.629686   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:04.635455   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:04.635455   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:05.128132   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:05.128132   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:05.128132   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:05.128132   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:05.131718   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:05.131718   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:05.131718   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:05 GMT
	I0731 23:36:05.131718   12704 round_trippers.go:580]     Audit-Id: a595dc0a-7d12-4acc-8286-d0c246a270d2
	I0731 23:36:05.132352   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:05.132352   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:05.132352   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:05.132352   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:05.132437   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:05.626479   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:05.626479   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:05.626479   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:05.626479   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:05.629174   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:05.629174   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:05.629174   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:05.629174   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:05.629739   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:05.629739   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:05.629739   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:05 GMT
	I0731 23:36:05.629739   12704 round_trippers.go:580]     Audit-Id: 0b7046ee-030c-4452-bb99-a6530faee78f
	I0731 23:36:05.630213   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:06.124407   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:06.124407   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:06.124407   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:06.124407   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:06.130287   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:36:06.130287   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:06.130287   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:06.130287   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:06.130287   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:06.130287   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:06 GMT
	I0731 23:36:06.130287   12704 round_trippers.go:580]     Audit-Id: 50fd1786-4b2a-4c1a-b69f-b8824b3a8579
	I0731 23:36:06.130287   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:06.130936   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:06.626299   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:06.626549   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:06.626549   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:06.626549   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:06.631749   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:36:06.631749   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:06.631749   12704 round_trippers.go:580]     Audit-Id: c4fd4b5d-43c3-4072-901e-dd9b3591dd66
	I0731 23:36:06.631749   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:06.631749   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:06.631749   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:06.631749   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:06.631749   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:06 GMT
	I0731 23:36:06.631749   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:07.134051   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:07.134051   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:07.134051   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:07.134295   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:07.136542   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:07.136542   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:07.136542   12704 round_trippers.go:580]     Audit-Id: ab5c81c8-4e26-416e-900a-d37cdf62fa26
	I0731 23:36:07.137568   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:07.137568   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:07.137568   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:07.137568   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:07.137568   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:07 GMT
	I0731 23:36:07.137862   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:07.138516   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:07.633910   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:07.634103   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:07.634103   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:07.634103   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:07.637249   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:07.637324   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:07.637324   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:07.637324   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:07.637324   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:07 GMT
	I0731 23:36:07.637324   12704 round_trippers.go:580]     Audit-Id: 53062962-009a-4fb3-8046-40e505d84746
	I0731 23:36:07.637385   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:07.637385   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:07.637579   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:08.132527   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:08.132865   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:08.132865   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:08.132865   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:08.137459   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:36:08.138174   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:08.138174   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:08.138174   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:08.138174   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:08.138174   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:08 GMT
	I0731 23:36:08.138174   12704 round_trippers.go:580]     Audit-Id: 2e5c5ff8-a4b2-4cb1-bc33-41a4e1c86bea
	I0731 23:36:08.138174   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:08.138174   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:08.629790   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:08.629790   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:08.629895   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:08.629895   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:08.633348   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:08.633348   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:08.633348   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:08.633348   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:08 GMT
	I0731 23:36:08.633348   12704 round_trippers.go:580]     Audit-Id: 58d19f18-6220-4ebe-8bbe-f758c0950136
	I0731 23:36:08.633348   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:08.633348   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:08.633348   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:08.633767   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:09.128974   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:09.128974   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:09.128974   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:09.128974   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:09.132720   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:09.132720   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:09.132720   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:09.132720   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:09.132720   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:09.132720   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:09 GMT
	I0731 23:36:09.132720   12704 round_trippers.go:580]     Audit-Id: 29ad03c8-ff73-4889-91c0-521fd72a0000
	I0731 23:36:09.132720   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:09.133833   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:09.630026   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:09.630152   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:09.630152   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:09.630152   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:09.632724   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:09.632724   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:09.632724   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:09.632724   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:09.632724   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:09 GMT
	I0731 23:36:09.633606   12704 round_trippers.go:580]     Audit-Id: 229ddc08-de26-4a46-8f83-c3343837e751
	I0731 23:36:09.633606   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:09.633606   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:09.633796   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:09.633796   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:10.127298   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:10.127298   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:10.127519   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:10.127519   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:10.132773   12704 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:36:10.132773   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:10.132773   12704 round_trippers.go:580]     Audit-Id: 9a1a5cce-7ac0-41d2-b842-e39b350440ac
	I0731 23:36:10.132773   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:10.133484   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:10.133484   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:10.133484   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:10.133484   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:10 GMT
	I0731 23:36:10.133726   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:10.638042   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:10.638042   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:10.638042   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:10.638042   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:10.640779   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:10.640779   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:10.640779   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:10.640779   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:10 GMT
	I0731 23:36:10.640779   12704 round_trippers.go:580]     Audit-Id: e0036c41-649e-4fee-81d7-7adf903d3c03
	I0731 23:36:10.640779   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:10.640779   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:10.641813   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:10.642056   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:11.136715   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:11.137064   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:11.137064   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:11.137064   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:11.140416   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:11.141374   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:11.141374   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:11 GMT
	I0731 23:36:11.141374   12704 round_trippers.go:580]     Audit-Id: cc95cd23-cd5c-4aff-8ec4-6a8d2efc23f1
	I0731 23:36:11.141374   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:11.141374   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:11.141374   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:11.141374   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:11.141630   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:11.636982   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:11.636982   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:11.636982   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:11.636982   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:11.639552   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:11.640464   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:11.640464   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:11.640464   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:11 GMT
	I0731 23:36:11.640464   12704 round_trippers.go:580]     Audit-Id: cd071316-5d5f-4dc2-8d83-4ab89e663593
	I0731 23:36:11.640464   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:11.640464   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:11.640464   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:11.640631   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:11.641161   12704 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:36:12.138297   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:12.138573   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:12.138573   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:12.138573   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:12.142479   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:12.142479   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:12.142479   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:12.142479   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:12 GMT
	I0731 23:36:12.142479   12704 round_trippers.go:580]     Audit-Id: fec94f21-6319-41a3-ac86-5cd60d47d9e8
	I0731 23:36:12.142479   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:12.142479   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:12.142479   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:12.142479   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:12.636216   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:12.636216   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:12.636216   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:12.636216   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:12.639267   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:12.639713   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:12.639713   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:12.639713   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:12.639713   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:12.639713   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:12 GMT
	I0731 23:36:12.639713   12704 round_trippers.go:580]     Audit-Id: 15431adb-66df-4214-a516-53efcd2c4c3b
	I0731 23:36:12.639713   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:12.639997   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"604","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3396 chars]
	I0731 23:36:13.134557   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:13.134557   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.134775   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.134775   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.141535   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:36:13.141535   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.141535   12704 round_trippers.go:580]     Audit-Id: de8efe12-d9f6-453f-96c0-4609e39309f5
	I0731 23:36:13.141535   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.141535   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.141535   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.141535   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.141535   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.141867   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"633","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3323 chars]
	I0731 23:36:13.142283   12704 node_ready.go:49] node "multinode-411400-m02" has status "Ready":"True"
	I0731 23:36:13.142340   12704 node_ready.go:38] duration metric: took 28.0203238s for node "multinode-411400-m02" to be "Ready" ...
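
	The block above is minikube's node readiness poll: roughly every 500 ms it issues GET /api/v1/nodes/multinode-411400-m02 and re-checks the node's Ready condition until the API reports True (about 28 s in this run). A minimal client-go sketch of that polling pattern, for illustration only — the function name, kubeconfig path, and poll interval here are assumptions, not minikube's actual implementation:

	// Sketch only (not minikube code): poll a node's Ready condition,
	// mirroring the GET-every-500ms loop visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		// GET /api/v1/nodes/<name> on each tick until status.conditions
		// contains Ready=True or the timeout expires.
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Assumes a kubeconfig at the default location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(context.Background(), cs, "multinode-411400-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
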
	I0731 23:36:13.142340   12704 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:36:13.142525   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods
	I0731 23:36:13.142543   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.142543   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.142543   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.151490   12704 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:36:13.151490   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.151490   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.151490   12704 round_trippers.go:580]     Audit-Id: 06b5b669-30d6-4f88-b350-5946328533fb
	I0731 23:36:13.151490   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.152533   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.152533   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.152533   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.155829   12704 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"633"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"427","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70370 chars]
	I0731 23:36:13.162059   12704 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.162165   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:36:13.162165   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.162165   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.162165   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.164976   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.164976   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.164976   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.164976   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.164976   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.164976   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.164976   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.164976   12704 round_trippers.go:580]     Audit-Id: 0dccdf94-cb22-4caf-aa5a-3bb1a64cfaa8
	I0731 23:36:13.164976   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"427","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0731 23:36:13.164976   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.164976   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.164976   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.164976   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.167481   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.167481   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.167481   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.167481   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.168516   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.168516   12704 round_trippers.go:580]     Audit-Id: aea8e998-56fa-4591-9a26-e78526486500
	I0731 23:36:13.168516   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.168516   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.168714   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:13.169090   12704 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.169193   12704 pod_ready.go:81] duration metric: took 7.1338ms for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
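
	Once the node is Ready, the log shows the same pattern repeated per system-critical pod: fetch the pod, read its Ready condition, and move on (or keep polling) accordingly. A small illustrative helper, reusing the corev1 import from the sketch above; isPodReady is an assumed name, not part of minikube:

	// Sketch only: the Ready check the pod_ready log lines imply.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
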
	I0731 23:36:13.169193   12704 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.169193   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:36:13.169193   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.169193   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.169193   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.171969   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.171969   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.171969   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.171969   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.171969   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.171969   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.171969   12704 round_trippers.go:580]     Audit-Id: 47b854ac-f385-4db4-bba6-21bbd885f18a
	I0731 23:36:13.171969   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.172621   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"d1476f05-7d77-424f-b5b3-c4c29f539af6","resourceVersion":"384","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.20.56:2379","kubernetes.io/config.hash":"5ce972ac835dbc580b580a401b4d452c","kubernetes.io/config.mirror":"5ce972ac835dbc580b580a401b4d452c","kubernetes.io/config.seen":"2024-07-31T23:32:26.731474656Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0731 23:36:13.173157   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.173216   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.173216   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.173216   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.179721   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:36:13.179721   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.179721   12704 round_trippers.go:580]     Audit-Id: 77da7e88-a72a-476b-adc3-ab1c4ef0403e
	I0731 23:36:13.179721   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.179721   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.179721   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.179721   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.179721   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.179721   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:13.180706   12704 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.180706   12704 pod_ready.go:81] duration metric: took 11.5127ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.180706   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.180840   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:36:13.180941   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.180991   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.180991   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.183239   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.183239   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.183239   12704 round_trippers.go:580]     Audit-Id: 39a5d87e-75f7-41d3-8745-bdffcc8dfc64
	I0731 23:36:13.183239   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.183239   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.183239   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.183239   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.183239   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.184146   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"fd9ca41e-c7ca-416e-b00e-b6cf76e4c434","resourceVersion":"385","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.20.56:8443","kubernetes.io/config.hash":"5f6d87c3026905e576dd63c1bfb6b167","kubernetes.io/config.mirror":"5f6d87c3026905e576dd63c1bfb6b167","kubernetes.io/config.seen":"2024-07-31T23:32:26.731475956Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0731 23:36:13.184647   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.184647   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.184647   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.184647   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.187222   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.187222   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.187682   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.187682   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.187682   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.187682   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.187682   12704 round_trippers.go:580]     Audit-Id: 7a34ad57-faa8-49d5-b838-d15470696c9b
	I0731 23:36:13.187682   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.187858   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:13.188311   12704 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.188311   12704 pod_ready.go:81] duration metric: took 7.6047ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.188311   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.188488   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:36:13.188488   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.188488   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.188488   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.190828   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.190828   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.190828   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.190828   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.190828   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.191359   12704 round_trippers.go:580]     Audit-Id: b5bb1e39-2628-4c8a-b8e9-bbc7a04b095d
	I0731 23:36:13.191359   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.191359   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.191536   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"386","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0731 23:36:13.192294   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.192294   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.192294   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.192294   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.194556   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:13.194556   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.194556   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.194556   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.194556   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.194556   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.194556   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.194556   12704 round_trippers.go:580]     Audit-Id: b5d58f58-a454-48cc-83d3-f59c118bb957
	I0731 23:36:13.195676   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:13.196085   12704 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.196134   12704 pod_ready.go:81] duration metric: took 7.7196ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.196134   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.338030   12704 request.go:629] Waited for 141.6633ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:36:13.338030   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:36:13.338030   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.338030   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.338030   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.342484   12704 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:36:13.342484   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.342484   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.342484   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.342484   12704 round_trippers.go:580]     Audit-Id: dbf08439-3391-473e-90f0-f81956b39f75
	I0731 23:36:13.342484   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.342484   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.342484   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.342484   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"379","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0731 23:36:13.541509   12704 request.go:629] Waited for 197.5999ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.541638   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:13.541638   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.541638   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.541638   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.545261   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:13.545777   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.545777   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.545777   12704 round_trippers.go:580]     Audit-Id: 8635cd5b-a0ab-4a8d-bbc2-3845120a9ba8
	I0731 23:36:13.545777   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.545777   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.545777   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.545777   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.545777   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:13.546626   12704 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.546710   12704 pod_ready.go:81] duration metric: took 350.5721ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.546710   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.743533   12704 request.go:629] Waited for 196.5187ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:36:13.743753   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:36:13.743753   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.743753   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.743753   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.747351   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:13.748152   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.748152   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.748152   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.748152   12704 round_trippers.go:580]     Audit-Id: 031d1bf5-3dc4-4edc-a353-430e28664e19
	I0731 23:36:13.748152   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.748152   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.748152   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.748395   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"610","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0731 23:36:13.947087   12704 request.go:629] Waited for 197.5833ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:13.947163   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:36:13.947312   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:13.947312   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:13.947312   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:13.953684   12704 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:36:13.953684   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:13.953684   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:13.953684   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:13.953684   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:13.953684   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:13.953684   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:13 GMT
	I0731 23:36:13.953684   12704 round_trippers.go:580]     Audit-Id: e2ec7189-fcfc-4ab6-a028-561d3a99539f
	I0731 23:36:13.954213   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"634","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3262 chars]
	I0731 23:36:13.954288   12704 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:13.954288   12704 pod_ready.go:81] duration metric: took 407.572ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:13.954831   12704 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:14.135430   12704 request.go:629] Waited for 180.2169ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:36:14.135542   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:36:14.135542   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:14.135542   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:14.135542   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:14.139419   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:14.139419   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:14.139419   12704 round_trippers.go:580]     Audit-Id: ec5aa20f-c186-495d-b110-fa0f90a86778
	I0731 23:36:14.140394   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:14.140394   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:14.140394   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:14.140394   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:14.140394   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:14 GMT
	I0731 23:36:14.140831   12704 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"383","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0731 23:36:14.338789   12704 request.go:629] Waited for 196.8966ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:14.338889   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes/multinode-411400
	I0731 23:36:14.339027   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:14.339027   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:14.339027   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:14.341781   12704 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:36:14.342722   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:14.342722   12704 round_trippers.go:580]     Audit-Id: bc1b6720-b2d4-4431-88fc-ac29cf0a4b46
	I0731 23:36:14.342781   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:14.342781   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:14.342781   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:14.342781   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:14.342781   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:14 GMT
	I0731 23:36:14.342904   12704 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0731 23:36:14.343739   12704 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:36:14.343739   12704 pod_ready.go:81] duration metric: took 388.9031ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:36:14.343739   12704 pod_ready.go:38] duration metric: took 1.201295s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:36:14.343739   12704 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:36:14.355478   12704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:36:14.380949   12704 system_svc.go:56] duration metric: took 37.2099ms WaitForService to wait for kubelet
	I0731 23:36:14.380949   12704 kubeadm.go:582] duration metric: took 29.5503883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:36:14.380949   12704 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:36:14.541964   12704 request.go:629] Waited for 160.9836ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.20.56:8443/api/v1/nodes
	I0731 23:36:14.541964   12704 round_trippers.go:463] GET https://172.17.20.56:8443/api/v1/nodes
	I0731 23:36:14.541964   12704 round_trippers.go:469] Request Headers:
	I0731 23:36:14.541964   12704 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:36:14.541964   12704 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:36:14.545638   12704 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:36:14.545638   12704 round_trippers.go:577] Response Headers:
	I0731 23:36:14.546371   12704 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:36:14.546371   12704 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:36:14 GMT
	I0731 23:36:14.546371   12704 round_trippers.go:580]     Audit-Id: f004c1bd-3eec-4ce3-98e7-83fe6080ab4e
	I0731 23:36:14.546371   12704 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:36:14.546371   12704 round_trippers.go:580]     Content-Type: application/json
	I0731 23:36:14.546371   12704 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:36:14.546824   12704 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"635"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"408","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9265 chars]
	I0731 23:36:14.547788   12704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:36:14.547855   12704 node_conditions.go:123] node cpu capacity is 2
	I0731 23:36:14.547855   12704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:36:14.547855   12704 node_conditions.go:123] node cpu capacity is 2
	I0731 23:36:14.547855   12704 node_conditions.go:105] duration metric: took 166.9042ms to run NodePressure ...
	I0731 23:36:14.547855   12704 start.go:241] waiting for startup goroutines ...
	I0731 23:36:14.547950   12704 start.go:255] writing updated cluster config ...
	I0731 23:36:14.561715   12704 ssh_runner.go:195] Run: rm -f paused
	I0731 23:36:14.700906   12704 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 23:36:14.706218   12704 out.go:177] * Done! kubectl is now configured to use "multinode-411400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 31 23:33:01 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:01.598606710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:01 multinode-411400 cri-dockerd[1322]: time="2024-07-31T23:33:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8da81f74292e903eca7b64c397eaecdd28f207f5397b5e78f9b6f8473c77a129/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 23:33:01 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:01.918789029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:33:01 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:01.919235929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:33:01 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:01.919522929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:01 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:01.920213328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.177077812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.177567612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.177674112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.178700712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:02 multinode-411400 cri-dockerd[1322]: time="2024-07-31T23:33:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a9f5c5f995787b613c70fc6139658ec0cfc874cae304f16a3f737520a55e645/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.586701794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.587032996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.587059996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:33:02 multinode-411400 dockerd[1431]: time="2024-07-31T23:33:02.587250996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:36:39 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:39.646045741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:36:39 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:39.646345643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:36:39 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:39.646363743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:36:39 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:39.647529650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:36:39 multinode-411400 cri-dockerd[1322]: time="2024-07-31T23:36:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3be0dbedbcbad87bffa4a65f5d2996690ce6699d0decc83870e09a871561c295/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 23:36:41 multinode-411400 cri-dockerd[1322]: time="2024-07-31T23:36:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 31 23:36:41 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:41.290658947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:36:41 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:41.290831649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:36:41 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:41.290848450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:36:41 multinode-411400 dockerd[1431]: time="2024-07-31T23:36:41.291220555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	987bcd17ce9fc       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   3be0dbedbcbad       busybox-fc5497c4f-4hgmz
	378f2a6593166       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   7a9f5c5f99578       coredns-7db6d8ff4d-z8gtw
	1d63a0cb77d55       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   8da81f74292e9       storage-provisioner
	284902a3378a8       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              4 minutes ago       Running             kindnet-cni               0                   7c2aeeb2eba1a       kindnet-j8slc
	07b42ba54367f       55bb025d2cfa5                                                                                         4 minutes ago       Running             kube-proxy                0                   0ae3ab4f2984f       kube-proxy-chdxg
	534fd9010fca6       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   74068ed5155bd       etcd-multinode-411400
	945a9963cd1c6       76932a3b37d7e                                                                                         5 minutes ago       Running             kube-controller-manager   0                   785da79d42d73       kube-controller-manager-multinode-411400
	54a3651cfe8b0       1f6d574d502f3                                                                                         5 minutes ago       Running             kube-apiserver            0                   78312ba260a79       kube-apiserver-multinode-411400
	6ce3944d7d13a       3edc18e7b7672                                                                                         5 minutes ago       Running             kube-scheduler            0                   68e7a182b5fc9       kube-scheduler-multinode-411400
	
	
	==> coredns [378f2a659316] <==
	[INFO] 10.244.0.3:39896 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110401s
	[INFO] 10.244.1.2:48843 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143802s
	[INFO] 10.244.1.2:54279 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000210003s
	[INFO] 10.244.1.2:45666 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213203s
	[INFO] 10.244.1.2:52392 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110902s
	[INFO] 10.244.1.2:52229 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000206603s
	[INFO] 10.244.1.2:57725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054401s
	[INFO] 10.244.1.2:57825 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073201s
	[INFO] 10.244.1.2:50190 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192703s
	[INFO] 10.244.0.3:41508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103802s
	[INFO] 10.244.0.3:36262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000628s
	[INFO] 10.244.0.3:49001 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185403s
	[INFO] 10.244.0.3:55139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144002s
	[INFO] 10.244.1.2:48028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102402s
	[INFO] 10.244.1.2:43656 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000603s
	[INFO] 10.244.1.2:53475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128602s
	[INFO] 10.244.1.2:45631 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134502s
	[INFO] 10.244.0.3:56422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115701s
	[INFO] 10.244.0.3:46466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000212103s
	[INFO] 10.244.0.3:56888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131002s
	[INFO] 10.244.0.3:54485 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000138902s
	[INFO] 10.244.1.2:54884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252204s
	[INFO] 10.244.1.2:42796 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207503s
	[INFO] 10.244.1.2:44407 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073801s
	[INFO] 10.244.1.2:60271 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000059001s
	
	
	==> describe nodes <==
	Name:               multinode-411400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-411400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-411400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_32_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-411400
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:37:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:37:01 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:37:01 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:37:01 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:37:01 +0000   Wed, 31 Jul 2024 23:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.20.56
	  Hostname:    multinode-411400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f90fcde8e48e4ef9a683e22ab79b6e31
	  System UUID:                64986569-631f-4c4a-a895-51aa6b031756
	  Boot ID:                    2b31b71b-cb69-4e59-9a36-e97a77a0e67c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hgmz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-7db6d8ff4d-z8gtw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m49s
	  kube-system                 etcd-multinode-411400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-j8slc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m49s
	  kube-system                 kube-apiserver-multinode-411400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-multinode-411400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-chdxg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-multinode-411400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m46s  kube-proxy       
	  Normal  Starting                 5m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m4s   kubelet          Node multinode-411400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s   kubelet          Node multinode-411400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s   kubelet          Node multinode-411400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m50s  node-controller  Node multinode-411400 event: Registered Node multinode-411400 in Controller
	  Normal  NodeReady                4m30s  kubelet          Node multinode-411400 status is now: NodeReady
	
	
	Name:               multinode-411400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-411400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-411400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_35_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:35:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-411400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:37:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:36:45 +0000   Wed, 31 Jul 2024 23:35:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:36:45 +0000   Wed, 31 Jul 2024 23:35:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:36:45 +0000   Wed, 31 Jul 2024 23:35:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:36:45 +0000   Wed, 31 Jul 2024 23:36:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.28.42
	  Hostname:    multinode-411400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2370c20b96943e8afe1eaf1cc6c3b53
	  System UUID:                cdb6ffbb-7e6a-b048-96cd-deaf1ec1b465
	  Boot ID:                    ccb52efc-83f2-439c-9d05-f4de2c360877
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lxslb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-bgnqq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      107s
	  kube-system                 kube-proxy-g7tpl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x2 over 107s)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x2 over 107s)  kubelet          Node multinode-411400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x2 over 107s)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-411400-m02 event: Registered Node multinode-411400-m02 in Controller
	  Normal  NodeReady                77s                  kubelet          Node multinode-411400-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.946938] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 23:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.183512] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[ +30.514996] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.111000] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.506229] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	[  +0.207199] systemd-fstab-generator[1047]: Ignoring "noauto" option for root device
	[  +0.228046] systemd-fstab-generator[1061]: Ignoring "noauto" option for root device
	[  +2.823917] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.176078] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.184166] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.278472] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[Jul31 23:32] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +0.106605] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.745825] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.557484] systemd-fstab-generator[1872]: Ignoring "noauto" option for root device
	[  +0.100643] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.543520] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +0.149498] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.507110] systemd-fstab-generator[2466]: Ignoring "noauto" option for root device
	[  +0.180777] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.479331] kauditd_printk_skb: 51 callbacks suppressed
	[Jul31 23:36] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [534fd9010fca] <==
	{"level":"info","ts":"2024-07-31T23:32:21.87452Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-07-31T23:32:47.532593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.216393ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18331498810279177902 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" value_size:6265 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T23:32:47.532749Z","caller":"traceutil/trace.go:171","msg":"trace[1587161400] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:400; }","duration":"298.513466ms","start":"2024-07-31T23:32:47.234222Z","end":"2024-07-31T23:32:47.532736Z","steps":["trace[1587161400] 'read index received'  (duration: 138.906879ms)","trace[1587161400] 'applied index is now lower than readState.Index'  (duration: 159.605987ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:32:47.532911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.690365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-411400\" ","response":"range_response_count:1 size:4485"}
	{"level":"info","ts":"2024-07-31T23:32:47.533067Z","caller":"traceutil/trace.go:171","msg":"trace[670735472] range","detail":"{range_begin:/registry/minions/multinode-411400; range_end:; response_count:1; response_revision:387; }","duration":"298.866865ms","start":"2024-07-31T23:32:47.234181Z","end":"2024-07-31T23:32:47.533048Z","steps":["trace[670735472] 'agreement among raft nodes before linearized reading'  (duration: 298.598966ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:32:47.533433Z","caller":"traceutil/trace.go:171","msg":"trace[863097965] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"553.677316ms","start":"2024-07-31T23:32:46.979744Z","end":"2024-07-31T23:32:47.533421Z","steps":["trace[863097965] 'process raft request'  (duration: 393.362432ms)","trace[863097965] 'compare'  (duration: 157.517396ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:32:47.53349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T23:32:46.979723Z","time spent":"553.732816ms","remote":"127.0.0.1:33716","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6340,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" value_size:6265 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-411400\" > >"}
	{"level":"info","ts":"2024-07-31T23:32:47.533785Z","caller":"traceutil/trace.go:171","msg":"trace[1732545105] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"353.50312ms","start":"2024-07-31T23:32:47.180273Z","end":"2024-07-31T23:32:47.533777Z","steps":["trace[1732545105] 'process raft request'  (duration: 352.417725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T23:32:47.53408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T23:32:47.180252Z","time spent":"353.55362ms","remote":"127.0.0.1:33786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":552,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-411400\" mod_revision:281 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-411400\" value_size:495 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-411400\" > >"}
	{"level":"warn","ts":"2024-07-31T23:32:49.39182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.789647ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18331498810279177918 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" value_size:1050 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T23:32:49.392252Z","caller":"traceutil/trace.go:171","msg":"trace[1069140975] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:405; }","duration":"163.081801ms","start":"2024-07-31T23:32:49.229056Z","end":"2024-07-31T23:32:49.392138Z","steps":["trace[1069140975] 'read index received'  (duration: 54.1µs)","trace[1069140975] 'applied index is now lower than readState.Index'  (duration: 163.026401ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:32:49.39256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.651798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-411400\" ","response":"range_response_count:1 size:4485"}
	{"level":"info","ts":"2024-07-31T23:32:49.392653Z","caller":"traceutil/trace.go:171","msg":"trace[1390010673] range","detail":"{range_begin:/registry/minions/multinode-411400; range_end:; response_count:1; response_revision:391; }","duration":"163.778298ms","start":"2024-07-31T23:32:49.228866Z","end":"2024-07-31T23:32:49.392644Z","steps":["trace[1390010673] 'agreement among raft nodes before linearized reading'  (duration: 163.504699ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:32:49.392912Z","caller":"traceutil/trace.go:171","msg":"trace[468662800] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"466.198359ms","start":"2024-07-31T23:32:48.926703Z","end":"2024-07-31T23:32:49.392902Z","steps":["trace[468662800] 'process raft request'  (duration: 287.278016ms)","trace[468662800] 'compare'  (duration: 176.336653ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:32:49.393241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T23:32:48.926688Z","time spent":"466.422958ms","remote":"127.0.0.1:33846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1122,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" value_size:1050 >> failure:<>"}
	{"level":"info","ts":"2024-07-31T23:33:16.522018Z","caller":"traceutil/trace.go:171","msg":"trace[2136383661] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"137.998534ms","start":"2024-07-31T23:33:16.383998Z","end":"2024-07-31T23:33:16.521997Z","steps":["trace[2136383661] 'process raft request'  (duration: 137.83383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T23:35:37.873116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.165953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-31T23:35:37.873545Z","caller":"traceutil/trace.go:171","msg":"trace[1327481851] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:554; }","duration":"178.614557ms","start":"2024-07-31T23:35:37.694899Z","end":"2024-07-31T23:35:37.873513Z","steps":["trace[1327481851] 'range keys from in-memory index tree'  (duration: 177.79325ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:35:54.665126Z","caller":"traceutil/trace.go:171","msg":"trace[469450526] linearizableReadLoop","detail":"{readStateIndex:659; appliedIndex:658; }","duration":"238.959206ms","start":"2024-07-31T23:35:54.426134Z","end":"2024-07-31T23:35:54.665093Z","steps":["trace[469450526] 'read index received'  (duration: 151.079079ms)","trace[469450526] 'applied index is now lower than readState.Index'  (duration: 87.879427ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:35:54.665723Z","caller":"traceutil/trace.go:171","msg":"trace[1978776365] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"378.269702ms","start":"2024-07-31T23:35:54.287444Z","end":"2024-07-31T23:35:54.665713Z","steps":["trace[1978776365] 'process raft request'  (duration: 289.80967ms)","trace[1978776365] 'compare'  (duration: 87.657426ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:35:54.665872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T23:35:54.287423Z","time spent":"378.344002ms","remote":"127.0.0.1:33786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-411400-m02\" mod_revision:584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-411400-m02\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-411400-m02\" > >"}
	{"level":"warn","ts":"2024-07-31T23:35:54.666317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.214216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T23:35:54.666365Z","caller":"traceutil/trace.go:171","msg":"trace[1189656143] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:605; }","duration":"240.537018ms","start":"2024-07-31T23:35:54.42582Z","end":"2024-07-31T23:35:54.666357Z","steps":["trace[1189656143] 'agreement among raft nodes before linearized reading'  (duration: 240.449318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T23:35:54.666776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.470682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T23:35:54.666822Z","caller":"traceutil/trace.go:171","msg":"trace[1984385497] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:605; }","duration":"221.531682ms","start":"2024-07-31T23:35:54.445283Z","end":"2024-07-31T23:35:54.666815Z","steps":["trace[1984385497] 'agreement among raft nodes before linearized reading'  (duration: 221.469982ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:37:30 up 7 min,  0 users,  load average: 0.75, 0.43, 0.19
	Linux multinode-411400 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [284902a3378a] <==
	I0731 23:36:20.930614       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:36:30.936364       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:36:30.937249       1 main.go:299] handling current node
	I0731 23:36:30.939471       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:36:30.939520       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:36:40.935435       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:36:40.935490       1 main.go:299] handling current node
	I0731 23:36:40.935509       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:36:40.935535       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:36:50.927375       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:36:50.927472       1 main.go:299] handling current node
	I0731 23:36:50.927493       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:36:50.927500       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:37:00.935092       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:37:00.935466       1 main.go:299] handling current node
	I0731 23:37:00.935600       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:37:00.935613       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:37:10.935359       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:37:10.935491       1 main.go:299] handling current node
	I0731 23:37:10.935513       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:37:10.935522       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:37:20.936072       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:37:20.936478       1 main.go:299] handling current node
	I0731 23:37:20.936659       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:37:20.936829       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [54a3651cfe8b] <==
	W0731 23:32:25.730685       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.20.56]
	I0731 23:32:25.732117       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:32:25.742143       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:32:26.368160       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:32:26.757653       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:32:26.796994       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 23:32:26.816307       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:32:40.146417       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 23:32:41.062156       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 23:32:47.538985       1 trace.go:236] Trace[840359866]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4688ae7e-4a85-47a3-867f-eeab21ff4110,client:172.17.20.56,api-group:,api-version:v1,name:kube-controller-manager-multinode-411400,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400/status,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (31-Jul-2024 23:32:46.950) (total time: 585ms):
	Trace[840359866]: ["GuaranteedUpdate etcd3" audit-id:4688ae7e-4a85-47a3-867f-eeab21ff4110,key:/pods/kube-system/kube-controller-manager-multinode-411400,type:*core.Pod,resource:pods 588ms (23:32:46.950)
	Trace[840359866]:  ---"Txn call completed" 556ms (23:32:47.535)]
	Trace[840359866]: ---"About to check admission control" 27ms (23:32:46.978)
	Trace[840359866]: ---"Object stored in database" 557ms (23:32:47.535)
	Trace[840359866]: [585.534861ms] [585.534861ms] END
	E0731 23:36:45.301538       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55312: use of closed network connection
	E0731 23:36:45.813758       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55315: use of closed network connection
	E0731 23:36:46.354370       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55317: use of closed network connection
	E0731 23:36:46.847816       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55319: use of closed network connection
	E0731 23:36:47.350631       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55321: use of closed network connection
	E0731 23:36:47.849706       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55323: use of closed network connection
	E0731 23:36:48.800617       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55326: use of closed network connection
	E0731 23:36:59.296136       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55328: use of closed network connection
	E0731 23:36:59.785721       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55331: use of closed network connection
	E0731 23:37:10.278320       1 conn.go:339] Error on socket receive: read tcp 172.17.20.56:8443->172.17.16.1:55333: use of closed network connection
	
	
	==> kube-controller-manager [945a9963cd1c] <==
	I0731 23:32:41.314804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.158523283s"
	I0731 23:32:41.463216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="148.353377ms"
	I0731 23:32:41.551688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.368631ms"
	I0731 23:32:41.555256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.444174ms"
	I0731 23:32:41.921703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="156.401516ms"
	I0731 23:32:41.980732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.960354ms"
	I0731 23:32:42.007122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.233405ms"
	I0731 23:32:42.007820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.1µs"
	I0731 23:33:01.059157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.1µs"
	I0731 23:33:01.105770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.1µs"
	I0731 23:33:03.073231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.802µs"
	I0731 23:33:03.124873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.777744ms"
	I0731 23:33:03.126751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.301µs"
	I0731 23:33:05.072876       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0731 23:35:43.850887       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-411400-m02\" does not exist"
	I0731 23:35:43.875150       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-411400-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:35:45.099078       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-411400-m02"
	I0731 23:36:13.159760       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:36:39.043225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.887684ms"
	I0731 23:36:39.085395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.089769ms"
	I0731 23:36:39.085716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.201µs"
	I0731 23:36:42.101445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.824663ms"
	I0731 23:36:42.101837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.5µs"
	I0731 23:36:42.303764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.648117ms"
	I0731 23:36:42.304221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32µs"
	
	
	==> kube-proxy [07b42ba54367] <==
	I0731 23:32:43.296545       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:32:43.313426       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.20.56"]
	I0731 23:32:43.376657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:32:43.376767       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:32:43.376822       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:32:43.383647       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:32:43.384448       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:32:43.384541       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:32:43.386410       1 config.go:192] "Starting service config controller"
	I0731 23:32:43.386452       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:32:43.386479       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:32:43.386624       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:32:43.387800       1 config.go:319] "Starting node config controller"
	I0731 23:32:43.387837       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:32:43.487419       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:32:43.488133       1 shared_informer.go:320] Caches are synced for node config
	I0731 23:32:43.487437       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6ce3944d7d13] <==
	W0731 23:32:24.385399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:32:24.386124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 23:32:24.475869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:32:24.476222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:32:24.551748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.552706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:24.652807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:32:24.652916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:32:24.754982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.755242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:24.795115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:32:24.795160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:32:24.809824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:32:24.809992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 23:32:24.926720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:32:24.927414       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 23:32:24.927383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:32:24.927749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:32:24.936525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:32:24.936549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:32:24.979298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.979424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:25.030175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:32:25.030225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 23:32:26.709053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 23:33:26 multinode-411400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:33:26 multinode-411400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:34:26 multinode-411400 kubelet[2282]: E0731 23:34:26.936584    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:34:26 multinode-411400 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:34:26 multinode-411400 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:34:26 multinode-411400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:34:26 multinode-411400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:35:26 multinode-411400 kubelet[2282]: E0731 23:35:26.934209    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:35:26 multinode-411400 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:35:26 multinode-411400 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:35:26 multinode-411400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:35:26 multinode-411400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:36:26 multinode-411400 kubelet[2282]: E0731 23:36:26.934405    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:36:26 multinode-411400 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:36:26 multinode-411400 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:36:26 multinode-411400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:36:26 multinode-411400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:36:39 multinode-411400 kubelet[2282]: I0731 23:36:39.027803    2282 topology_manager.go:215] "Topology Admit Handler" podUID="5430f4af-5b97-4c7d-90cc-53926f8d496b" podNamespace="default" podName="busybox-fc5497c4f-4hgmz"
	Jul 31 23:36:39 multinode-411400 kubelet[2282]: I0731 23:36:39.199195    2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5rn\" (UniqueName: \"kubernetes.io/projected/5430f4af-5b97-4c7d-90cc-53926f8d496b-kube-api-access-qs5rn\") pod \"busybox-fc5497c4f-4hgmz\" (UID: \"5430f4af-5b97-4c7d-90cc-53926f8d496b\") " pod="default/busybox-fc5497c4f-4hgmz"
	Jul 31 23:36:42 multinode-411400 kubelet[2282]: I0731 23:36:42.089677    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-4hgmz" podStartSLOduration=2.87009402 podStartE2EDuration="4.089655715s" podCreationTimestamp="2024-07-31 23:36:38 +0000 UTC" firstStartedPulling="2024-07-31 23:36:39.886538995 +0000 UTC m=+253.262721026" lastFinishedPulling="2024-07-31 23:36:41.10610069 +0000 UTC m=+254.482282721" observedRunningTime="2024-07-31 23:36:42.089444912 +0000 UTC m=+255.465627043" watchObservedRunningTime="2024-07-31 23:36:42.089655715 +0000 UTC m=+255.465837746"
	Jul 31 23:37:26 multinode-411400 kubelet[2282]: E0731 23:37:26.935526    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:37:26 multinode-411400 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:37:26 multinode-411400 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:37:26 multinode-411400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:37:26 multinode-411400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:37:22.242462    6260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-411400 -n multinode-411400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-411400 -n multinode-411400: (12.2546845s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-411400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (470.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-411400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-411400
E0731 23:52:53.190777   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:53:16.632835   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-411400: (1m36.6135344s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-411400 --wait=true -v=8 --alsologtostderr
E0731 23:55:13.428762   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 23:57:53.202923   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-411400 --wait=true -v=8 --alsologtostderr: exit status 1 (5m36.6178413s)

                                                
                                                
-- stdout --
	* [multinode-411400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-411400" primary control-plane node in "multinode-411400" cluster
	* Restarting existing hyperv VM for "multinode-411400" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-411400-m02" worker node in "multinode-411400" cluster
	* Restarting existing hyperv VM for "multinode-411400-m02" ...
	* Found network options:
	  - NO_PROXY=172.17.27.27
	  - NO_PROXY=172.17.27.27
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - env NO_PROXY=172.17.27.27
	* Verifying Kubernetes components...
	
	* Starting "multinode-411400-m03" worker node in "multinode-411400" cluster
	* Restarting existing hyperv VM for "multinode-411400-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:53:48.144319    9020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 23:53:48.303046    9020 out.go:291] Setting OutFile to fd 1580 ...
	I0731 23:53:48.304668    9020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:53:48.304668    9020 out.go:304] Setting ErrFile to fd 1560...
	I0731 23:53:48.304668    9020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:53:48.333709    9020 out.go:298] Setting JSON to false
	I0731 23:53:48.337778    9020 start.go:129] hostinfo: {"hostname":"minikube6","uptime":545969,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 23:53:48.337778    9020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 23:53:48.432273    9020 out.go:177] * [multinode-411400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 23:53:48.505561    9020 notify.go:220] Checking for updates...
	I0731 23:53:48.580031    9020 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:53:48.718155    9020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:53:48.761871    9020 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 23:53:48.855845    9020 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:53:49.014279    9020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:53:49.024644    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:53:49.024644    9020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:53:54.504269    9020 out.go:177] * Using the hyperv driver based on existing profile
	I0731 23:53:54.560422    9020 start.go:297] selected driver: hyperv
	I0731 23:53:54.561486    9020 start.go:901] validating driver "hyperv" against &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:53:54.561831    9020 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:53:54.615188    9020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:53:54.615188    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:53:54.615188    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:53:54.615707    9020 start.go:340] cluster config:
	{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:fa
lse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:53:54.615820    9020 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:53:54.747039    9020 out.go:177] * Starting "multinode-411400" primary control-plane node in "multinode-411400" cluster
	I0731 23:53:54.805142    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:53:54.805833    9020 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 23:53:54.806321    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:53:54.806474    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:53:54.807145    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:53:54.807221    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:53:54.810284    9020 start.go:360] acquireMachinesLock for multinode-411400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:53:54.810387    9020 start.go:364] duration metric: took 102.2µs to acquireMachinesLock for "multinode-411400"
	I0731 23:53:54.810685    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:53:54.810851    9020 fix.go:54] fixHost starting: 
	I0731 23:53:54.812038    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:53:57.486356    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:53:57.486356    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:53:57.486356    9020 fix.go:112] recreateIfNeeded on multinode-411400: state=Stopped err=<nil>
	W0731 23:53:57.486356    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:53:57.491929    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400" ...
	I0731 23:53:57.495953    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400
	I0731 23:54:00.411013    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:00.411616    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:00.411616    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:54:00.411616    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:02.547435    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:02.547822    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:02.547943    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:04.966261    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:04.966261    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:05.973127    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:08.153089    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:08.153925    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:08.154053    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:10.585269    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:10.585269    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:11.600141    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:13.678015    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:13.678086    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:13.678086    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:16.097463    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:16.097463    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:17.108434    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:19.228962    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:19.229129    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:19.229129    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:21.666842    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:21.667179    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:22.678852    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:24.819658    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:24.820787    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:24.820787    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:27.245507    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:27.245616    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:27.248491    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:29.306823    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:29.306823    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:29.307614    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:31.698157    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:31.698939    9020 main.go:141] libmachine: [stderr =====>] : 
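
The restart above is driven entirely through PowerShell: the driver repeatedly queries ( Hyper-V\Get-VM <name> ).state and the first adapter's ipaddresses[0] until the guest reports an address (172.17.27.27 here). A minimal Go sketch of that polling pattern, assuming powershell.exe is on PATH; this is illustrative only, not minikube's actual libmachine code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// vmIP asks Hyper-V (via PowerShell) for the first IP of the VM's first adapter.
	func vmIP(name string) (string, error) {
		ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Poll every 5s; an empty result means the guest has not obtained an address yet.
		for {
			if ip, err := vmIP("multinode-411400"); err == nil && ip != "" {
				fmt.Println("host is up at", ip)
				return
			}
			time.Sleep(5 * time.Second)
		}
	}
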
	I0731 23:54:31.698939    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:54:31.701792    9020 machine.go:94] provisionDockerMachine start ...
	I0731 23:54:31.701792    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:33.681441    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:33.681441    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:33.682528    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:36.061380    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:36.061380    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:36.066983    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:36.067662    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:36.067662    9020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:54:36.194168    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:54:36.194168    9020 buildroot.go:166] provisioning hostname "multinode-411400"
	I0731 23:54:36.194168    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:38.196247    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:38.196247    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:38.196808    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:40.590494    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:40.591287    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:40.596466    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:40.597009    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:40.597233    9020 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400 && echo "multinode-411400" | sudo tee /etc/hostname
	I0731 23:54:40.738821    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400
	
	I0731 23:54:40.738917    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:42.751663    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:42.752048    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:42.752048    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:45.123560    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:45.124329    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:45.130270    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:45.130811    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:45.130811    9020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:54:45.266946    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:54:45.267008    9020 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:54:45.267065    9020 buildroot.go:174] setting up certificates
	I0731 23:54:45.267065    9020 provision.go:84] configureAuth start
	I0731 23:54:45.267149    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:47.285647    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:47.285647    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:47.286422    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:49.741804    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:49.742765    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:49.742765    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:51.757854    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:51.758198    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:51.758198    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:54.169048    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:54.169048    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:54.169048    9020 provision.go:143] copyHostCerts
	I0731 23:54:54.169048    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:54:54.169838    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:54:54.169931    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:54:54.170209    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:54:54.172171    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:54:54.172325    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:54:54.172325    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:54:54.172961    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:54:54.174445    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:54:54.174445    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:54:54.174445    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:54:54.175276    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:54:54.176022    9020 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400 san=[127.0.0.1 172.17.27.27 localhost minikube multinode-411400]
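
provision.go regenerates the Docker server certificate so its SANs cover every name the endpoint will be reached by (127.0.0.1, 172.17.27.27, localhost, minikube, multinode-411400). A hedged Go sketch of issuing a certificate with that SAN set; for brevity it is self-signed, whereas the real flow signs with the ca.pem/ca-key.pem shown above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and certificate template carrying the SANs from the log above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-411400"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-411400"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.27.27")},
		}
		// Self-signed here; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
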
	I0731 23:54:54.288634    9020 provision.go:177] copyRemoteCerts
	I0731 23:54:54.298591    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:54:54.298591    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:56.295797    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:56.296599    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:56.296685    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:58.692627    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:58.692627    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:58.692871    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:54:58.793672    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4950244s)
	I0731 23:54:58.793832    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:54:58.794375    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:54:58.843786    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:54:58.844781    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 23:54:58.886908    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:54:58.886908    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:54:58.937725    9020 provision.go:87] duration metric: took 13.670417s to configureAuth
	I0731 23:54:58.937790    9020 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:54:58.938698    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:54:58.938867    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:00.979002    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:00.979246    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:00.979314    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:03.404415    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:03.404603    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:03.409176    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:03.410037    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:03.410037    9020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:55:03.533840    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:55:03.533955    9020 buildroot.go:70] root file system type: tmpfs
	I0731 23:55:03.534148    9020 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:55:03.534224    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:05.575663    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:05.575663    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:05.576328    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:07.999632    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:07.999696    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:08.004983    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:08.005048    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:08.005589    9020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:55:08.174742    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:55:08.174924    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:10.215243    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:10.216337    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:10.216442    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:12.699027    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:12.699027    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:12.705182    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:12.705364    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:12.705902    9020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:55:15.163334    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:55:15.163334    9020 machine.go:97] duration metric: took 43.4609895s to provisionDockerMachine
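
The unit above is staged as docker.service.new and only moved into place (followed by daemon-reload, enable, restart) when diff reports a difference or the target does not yet exist, which keeps repeated provisioning idempotent. A small Go sketch that composes that "install if changed" one-liner; the helper name is hypothetical:

	package main

	import "fmt"

	// installIfChanged returns a shell one-liner that replaces dst with src and
	// restarts the unit only when the two files differ (or dst is missing).
	func installIfChanged(src, dst, unit string) string {
		return fmt.Sprintf(
			"sudo diff -u %[2]s %[1]s || { sudo mv %[1]s %[2]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && sudo systemctl -f restart %[3]s; }",
			src, dst, unit)
	}

	func main() {
		fmt.Println(installIfChanged(
			"/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service",
			"docker"))
	}
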
	I0731 23:55:15.164008    9020 start.go:293] postStartSetup for "multinode-411400" (driver="hyperv")
	I0731 23:55:15.164008    9020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:55:15.175663    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:55:15.175663    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:17.215217    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:17.215722    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:17.216004    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:19.651516    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:19.652075    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:19.652528    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:19.756942    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5812209s)
	I0731 23:55:19.769764    9020 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:55:19.776813    9020 command_runner.go:130] > NAME=Buildroot
	I0731 23:55:19.776977    9020 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:55:19.776977    9020 command_runner.go:130] > ID=buildroot
	I0731 23:55:19.776977    9020 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:55:19.776977    9020 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:55:19.777089    9020 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:55:19.777089    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:55:19.777626    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:55:19.778757    9020 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:55:19.778833    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:55:19.789315    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:55:19.806910    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:55:19.851172    9020 start.go:296] duration metric: took 4.6871052s for postStartSetup
	I0731 23:55:19.851301    9020 fix.go:56] duration metric: took 1m25.0393722s for fixHost
	I0731 23:55:19.851386    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:21.943769    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:21.943769    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:21.944333    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:24.386522    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:24.386730    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:24.391916    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:24.392597    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:24.392597    9020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:55:24.507809    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722470124.528439864
	
	I0731 23:55:24.507809    9020 fix.go:216] guest clock: 1722470124.528439864
	I0731 23:55:24.507809    9020 fix.go:229] Guest: 2024-07-31 23:55:24.528439864 +0000 UTC Remote: 2024-07-31 23:55:19.8513011 +0000 UTC m=+91.814596601 (delta=4.677138764s)
	I0731 23:55:24.507809    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:26.612699    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:26.612699    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:26.613451    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:29.092016    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:29.092290    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:29.098597    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:29.099410    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:29.099410    9020 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722470124
	I0731 23:55:29.240977    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:55:24 UTC 2024
	
	I0731 23:55:29.240977    9020 fix.go:236] clock set: Wed Jul 31 23:55:24 UTC 2024
	 (err=<nil>)
	I0731 23:55:29.240977    9020 start.go:83] releasing machines lock for "multinode-411400", held for 1m34.4293944s
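
fix.go compares the guest's `date +%s.%N` output against the host-side timestamp recorded when provisioning finished; here the guest is about 4.68s ahead, so the clock is rewritten with `sudo date -s @<epoch>` before certificates and etcd come up. A sketch of that drift check in Go, assuming a simple one-second tolerance (the real threshold lives in minikube's fix.go):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, as captured in the log above.
		guestOut := "1722470124.528439864"
		secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		// Host-side reference time from the same log entry.
		host := time.Date(2024, 7, 31, 23, 55, 19, 851301100, time.UTC)
		drift := guest.Sub(host)

		if math.Abs(drift.Seconds()) > 1 {
			// Push a whole-second epoch into the guest, mirroring the
			// `sudo date -s @1722470124` call shown above.
			fmt.Printf("sudo date -s @%d\n", guest.Unix())
		}
		fmt.Println("delta:", drift)
	}
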
	I0731 23:55:29.240977    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:31.372781    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:31.373906    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:31.373906    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:33.861029    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:33.862019    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:33.866263    9020 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:55:33.866359    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:33.876325    9020 ssh_runner.go:195] Run: cat /version.json
	I0731 23:55:33.876325    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:35.994656    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:35.994656    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:38.571494    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:38.572480    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:38.572480    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:38.596355    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:38.596407    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:38.596871    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:38.659327    9020 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:55:38.659327    9020 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.7930034s)
	W0731 23:55:38.659327    9020 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 23:55:38.691780    9020 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 23:55:38.691780    9020 ssh_runner.go:235] Completed: cat /version.json: (4.8153931s)
	I0731 23:55:38.703501    9020 ssh_runner.go:195] Run: systemctl --version
	I0731 23:55:38.712127    9020 command_runner.go:130] > systemd 252 (252)
	I0731 23:55:38.712427    9020 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 23:55:38.726256    9020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:55:38.734795    9020 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 23:55:38.735416    9020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:55:38.748043    9020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:55:38.776538    9020 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:55:38.776802    9020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:55:38.776854    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:55:38.776854    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 23:55:38.795453    9020 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:55:38.795453    9020 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:55:38.811536    9020 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:55:38.824082    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 23:55:38.855355    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:55:38.874587    9020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:55:38.886896    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:55:38.918370    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:55:38.952490    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:55:38.984140    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:55:39.014888    9020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:55:39.045416    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:55:39.075835    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:55:39.105592    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:55:39.136884    9020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:55:39.156637    9020 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:55:39.168320    9020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:55:39.196791    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:39.391472    9020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 23:55:39.423944    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:55:39.435390    9020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:55:39.458952    9020 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:55:39.458952    9020 command_runner.go:130] > [Unit]
	I0731 23:55:39.458952    9020 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:55:39.458952    9020 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:55:39.458952    9020 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:55:39.458952    9020 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:55:39.458952    9020 command_runner.go:130] > StartLimitBurst=3
	I0731 23:55:39.458952    9020 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:55:39.458952    9020 command_runner.go:130] > [Service]
	I0731 23:55:39.458952    9020 command_runner.go:130] > Type=notify
	I0731 23:55:39.458952    9020 command_runner.go:130] > Restart=on-failure
	I0731 23:55:39.458952    9020 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:55:39.458952    9020 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:55:39.458952    9020 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:55:39.459520    9020 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:55:39.459520    9020 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:55:39.459599    9020 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:55:39.459599    9020 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:55:39.459599    9020 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:55:39.459599    9020 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecStart=
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:55:39.459599    9020 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitCORE=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:55:39.459599    9020 command_runner.go:130] > TasksMax=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:55:39.459599    9020 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:55:39.459599    9020 command_runner.go:130] > Delegate=yes
	I0731 23:55:39.459599    9020 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:55:39.459599    9020 command_runner.go:130] > KillMode=process
	I0731 23:55:39.459599    9020 command_runner.go:130] > [Install]
	I0731 23:55:39.459599    9020 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:55:39.472871    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:55:39.502315    9020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:55:39.555548    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:55:39.593597    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:55:39.625818    9020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:55:39.695261    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:55:39.716485    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:55:39.750433    9020 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0731 23:55:39.763453    9020 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:55:39.768931    9020 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:55:39.781103    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:55:39.798686    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:55:39.838571    9020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:55:40.025101    9020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:55:40.184564    9020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:55:40.184564    9020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:55:40.225612    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:40.404240    9020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:55:43.058791    9020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.654378s)
	I0731 23:55:43.069789    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:55:43.104525    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:55:43.139825    9020 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:55:43.312524    9020 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:55:43.476549    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:43.640784    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:55:43.677549    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:55:43.713360    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:43.889437    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:55:43.982972    9020 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:55:43.994548    9020 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:55:44.003307    9020 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:55:44.003380    9020 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:55:44.003380    9020 command_runner.go:130] > Device: 0,22	Inode: 865         Links: 1
	I0731 23:55:44.003380    9020 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:55:44.003380    9020 command_runner.go:130] > Access: 2024-07-31 23:55:43.930593337 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] > Modify: 2024-07-31 23:55:43.930593337 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] > Change: 2024-07-31 23:55:43.933593361 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] >  Birth: -
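
After cri-docker.service is restarted, the flow waits up to 60s for /var/run/cri-dockerd.sock to appear, verified with the stat call above. A generic Go sketch of that kind of socket wait (path, timeout, and helper name are illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the timeout hits.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("cri-dockerd socket is ready")
	}
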
	I0731 23:55:44.003451    9020 start.go:563] Will wait 60s for crictl version
	I0731 23:55:44.015556    9020 ssh_runner.go:195] Run: which crictl
	I0731 23:55:44.021044    9020 command_runner.go:130] > /usr/bin/crictl
	I0731 23:55:44.031791    9020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:55:44.091714    9020 command_runner.go:130] > Version:  0.1.0
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeName:  docker
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:55:44.091963    9020 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:55:44.100288    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:55:44.133383    9020 command_runner.go:130] > 27.1.1
	I0731 23:55:44.143490    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:55:44.170758    9020 command_runner.go:130] > 27.1.1
	I0731 23:55:44.175213    9020 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:55:44.175808    9020 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:55:44.182638    9020 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:55:44.182638    9020 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:55:44.192311    9020 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:55:44.198398    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:55:44.218156    9020 kubeadm.go:883] updating cluster {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:55:44.218463    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:55:44.228577    9020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:55:44.253863    9020 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0731 23:55:44.253863    9020 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 23:55:44.255002    9020 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 23:55:44.255032    9020 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 23:55:44.255032    9020 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:55:44.255100    9020 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0731 23:55:44.255214    9020 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0731 23:55:44.255273    9020 docker.go:615] Images already preloaded, skipping extraction
	I0731 23:55:44.266878    9020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:55:44.291889    9020 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0731 23:55:44.291889    9020 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 23:55:44.292521    9020 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:55:44.292521    9020 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0731 23:55:44.292632    9020 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0731 23:55:44.292632    9020 cache_images.go:84] Images are preloaded, skipping loading
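
cache_images.go decides to skip loading because every image from the preloaded tarball already appears in the `docker images --format {{.Repository}}:{{.Tag}}` listing above. A minimal Go sketch of that membership check (hypothetical helper, not the minikube function):

	package main

	import (
		"fmt"
		"strings"
	)

	// hasAllImages reports whether every required image is present in the
	// `docker images --format {{.Repository}}:{{.Tag}}` output.
	func hasAllImages(dockerImagesOut string, required []string) bool {
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
			have[strings.TrimSpace(line)] = true
		}
		for _, img := range required {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		out := "registry.k8s.io/kube-apiserver:v1.30.3\nregistry.k8s.io/etcd:3.5.12-0\nregistry.k8s.io/pause:3.9"
		fmt.Println(hasAllImages(out, []string{"registry.k8s.io/pause:3.9"}))
	}
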
	I0731 23:55:44.292632    9020 kubeadm.go:934] updating node { 172.17.27.27 8443 v1.30.3 docker true true} ...
	I0731 23:55:44.292632    9020 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.27.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
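
kubeadm.go renders a per-node kubelet drop-in whose ExecStart pins the binary version, hostname override, and node IP shown above. A small Go sketch of templating that line; the function name is illustrative only:

	package main

	import "fmt"

	// kubeletExecStart renders the override ExecStart line for one node,
	// mirroring the paths and flags in the unit fragment above.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s", version, nodeName, nodeIP)
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.30.3", "multinode-411400", "172.17.27.27"))
	}
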
	I0731 23:55:44.302505    9020 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 23:55:44.368959    9020 command_runner.go:130] > cgroupfs
	I0731 23:55:44.369197    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:55:44.369303    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:55:44.369303    9020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:55:44.369383    9020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.27.27 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-411400 NodeName:multinode-411400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.27.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.27.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:55:44.369690    9020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.27.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-411400"
	  kubeletExtraArgs:
	    node-ip: 172.17.27.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:55:44.381566    9020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:55:44.399701    9020 command_runner.go:130] > kubeadm
	I0731 23:55:44.400453    9020 command_runner.go:130] > kubectl
	I0731 23:55:44.400453    9020 command_runner.go:130] > kubelet
	I0731 23:55:44.400524    9020 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:55:44.414990    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:55:44.433566    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0731 23:55:44.464697    9020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:55:44.492119    9020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0731 23:55:44.536267    9020 ssh_runner.go:195] Run: grep 172.17.27.27	control-plane.minikube.internal$ /etc/hosts
	I0731 23:55:44.542451    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.27.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
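
The bash one-liner above makes the control-plane.minikube.internal entry idempotent: any stale line for that host is dropped and the current IP is appended. A rough Go equivalent (a sketch assuming a simple substring match is enough to find the stale entry; updateHosts is a hypothetical helper, not minikube's code, and the sample file path is illustrative):

    // hostsupdate.go - idempotent rewrite of a hosts entry, mirroring the
    // grep -v / echo / cp pipeline shown in the log above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // updateHosts drops any existing line mentioning host and appends "ip\thost".
    func updateHosts(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.Contains(line, host) {
                continue // drop the stale control-plane entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // /etc/hosts needs root to modify; a throwaway copy is fine for a dry run.
        tmp := "hosts.sample"
        _ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n172.17.20.56\tcontrol-plane.minikube.internal\n"), 0o644)
        if err := updateHosts(tmp, "172.17.27.27", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        out, _ := os.ReadFile(tmp)
        fmt.Print(string(out))
    }
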
	I0731 23:55:44.573227    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:44.751717    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:55:44.777678    9020 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.27.27
	I0731 23:55:44.777678    9020 certs.go:194] generating shared ca certs ...
	I0731 23:55:44.777678    9020 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:44.778450    9020 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:55:44.778975    9020 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:55:44.779188    9020 certs.go:256] generating profile certs ...
	I0731 23:55:44.780202    9020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.key
	I0731 23:55:44.780365    9020 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3
	I0731 23:55:44.780516    9020 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.27.27]
	I0731 23:55:45.252832    9020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 ...
	I0731 23:55:45.252832    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3: {Name:mkdb51b0d280536affe66ab51b6a08832fa60b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:45.254830    9020 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3 ...
	I0731 23:55:45.254830    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3: {Name:mk841b7da4da410d1e8b99278113a65ffb8f6558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:45.255329    9020 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt
	I0731 23:55:45.269260    9020 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key
	I0731 23:55:45.271064    9020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:55:45.271699    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 23:55:45.271769    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 23:55:45.271769    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 23:55:45.272357    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 23:55:45.272357    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:55:45.273102    9020 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:55:45.273212    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:55:45.273302    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:55:45.273302    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:55:45.273936    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:55:45.273936    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:55:45.274632    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.274762    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:55:45.274762    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.276189    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:55:45.323679    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:55:45.368641    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:55:45.417472    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:55:45.458469    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 23:55:45.498699    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:55:45.544197    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:55:45.587523    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:55:45.635674    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:55:45.680937    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:55:45.726057    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:55:45.768047    9020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
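
The profile certificate generated above (crypto.go:68) is an apiserver serving cert whose IP SANs cover the service VIP, loopback, and the node IP. A self-contained Go sketch of issuing such a cert with crypto/x509 (the freshly generated CA below stands in for minikube's existing ca.crt/ca.key; error handling is elided for brevity):

    // apiservercert.go - mint a serving cert with the IP SANs seen in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (minikube reuses the existing minikubeCA key pair instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the same IP SANs the log reports for apiserver.crt.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("172.17.27.27"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
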
	I0731 23:55:45.815396    9020 ssh_runner.go:195] Run: openssl version
	I0731 23:55:45.823403    9020 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:55:45.834951    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:55:45.866428    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.872431    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.872431    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.883641    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.891747    9020 command_runner.go:130] > b5213941
	I0731 23:55:45.903141    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:55:45.933044    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:55:45.963641    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.970884    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.971048    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.982335    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.993114    9020 command_runner.go:130] > 51391683
	I0731 23:55:46.005115    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 23:55:46.038924    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:55:46.073403    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.079598    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.079598    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.091216    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.102142    9020 command_runner.go:130] > 3ec20f2e
	I0731 23:55:46.114740    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:55:46.145487    9020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:55:46.152576    9020 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:55:46.152576    9020 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 23:55:46.152576    9020 command_runner.go:130] > Device: 8,1	Inode: 531538      Links: 1
	I0731 23:55:46.152576    9020 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:55:46.152576    9020 command_runner.go:130] > Access: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] > Modify: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] > Change: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] >  Birth: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.162891    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:55:46.172003    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.183721    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:55:46.191931    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.202382    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:55:46.211030    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.221288    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:55:46.229966    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.242422    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:55:46.251654    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.263249    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 23:55:46.271755    9020 command_runner.go:130] > Certificate will not expire
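
Each "openssl x509 -noout -checkend 86400" run above asks a single question: does the certificate expire within the next 24 hours? The same check in Go (a sketch; the file name in main is illustrative):

    // checkend.go - report whether a PEM certificate expires within 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
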
	I0731 23:55:46.272350    9020 kubeadm.go:392] StartCluster: {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:55:46.282008    9020 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 23:55:46.316616    9020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/minikube/etcd:
	I0731 23:55:46.332873    9020 command_runner.go:130] > member
	I0731 23:55:46.332873    9020 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 23:55:46.332873    9020 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 23:55:46.345216    9020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 23:55:46.362659    9020 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:55:46.363845    9020 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-411400" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:46.364542    9020 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-411400" cluster setting kubeconfig missing "multinode-411400" context setting]
	I0731 23:55:46.365162    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
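
The repair decision above boils down to whether the profile name exists as both a cluster and a context in the kubeconfig file. A sketch using client-go's clientcmd loader (requires k8s.io/client-go in go.mod; needsRepair is a hypothetical helper, not minikube's kubeconfig.go):

    // kubeconfigcheck.go - decide whether a kubeconfig is missing a profile.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // needsRepair is true when the profile is absent as a cluster or a context.
    func needsRepair(kubeconfigPath, profile string) (bool, error) {
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return false, err
        }
        _, hasCluster := cfg.Clusters[profile]
        _, hasContext := cfg.Contexts[profile]
        return !hasCluster || !hasContext, nil
    }

    func main() {
        repair, err := needsRepair("C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig", "multinode-411400")
        if err != nil {
            fmt.Println("verify endpoint returned:", err)
            return
        }
        fmt.Println("kubeconfig needs updating:", repair)
    }
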
	I0731 23:55:46.381838    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:46.382503    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:55:46.384012    9020 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 23:55:46.394668    9020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:55:46.411599    9020 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0731 23:55:46.411681    9020 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:55:46.411681    9020 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0731 23:55:46.411681    9020 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0731 23:55:46.411681    9020 command_runner.go:130] >  kind: InitConfiguration
	I0731 23:55:46.411681    9020 command_runner.go:130] >  localAPIEndpoint:
	I0731 23:55:46.411681    9020 command_runner.go:130] > -  advertiseAddress: 172.17.20.56
	I0731 23:55:46.411681    9020 command_runner.go:130] > +  advertiseAddress: 172.17.27.27
	I0731 23:55:46.411681    9020 command_runner.go:130] >    bindPort: 8443
	I0731 23:55:46.411681    9020 command_runner.go:130] >  bootstrapTokens:
	I0731 23:55:46.411757    9020 command_runner.go:130] >    - groups:
	I0731 23:55:46.411757    9020 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0731 23:55:46.411797    9020 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0731 23:55:46.411797    9020 command_runner.go:130] >    name: "multinode-411400"
	I0731 23:55:46.411797    9020 command_runner.go:130] >    kubeletExtraArgs:
	I0731 23:55:46.411797    9020 command_runner.go:130] > -    node-ip: 172.17.20.56
	I0731 23:55:46.411830    9020 command_runner.go:130] > +    node-ip: 172.17.27.27
	I0731 23:55:46.411830    9020 command_runner.go:130] >    taints: []
	I0731 23:55:46.411830    9020 command_runner.go:130] >  ---
	I0731 23:55:46.411830    9020 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0731 23:55:46.411830    9020 command_runner.go:130] >  kind: ClusterConfiguration
	I0731 23:55:46.411830    9020 command_runner.go:130] >  apiServer:
	I0731 23:55:46.411830    9020 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.20.56"]
	I0731 23:55:46.411830    9020 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	I0731 23:55:46.411830    9020 command_runner.go:130] >    extraArgs:
	I0731 23:55:46.411830    9020 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0731 23:55:46.411830    9020 command_runner.go:130] >  controllerManager:
	I0731 23:55:46.411830    9020 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.20.56
	+  advertiseAddress: 172.17.27.27
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-411400"
	   kubeletExtraArgs:
	-    node-ip: 172.17.20.56
	+    node-ip: 172.17.27.27
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.20.56"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0731 23:55:46.411830    9020 kubeadm.go:1160] stopping kube-system containers ...
	I0731 23:55:46.421547    9020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 23:55:46.448123    9020 command_runner.go:130] > 378f2a659316
	I0731 23:55:46.448123    9020 command_runner.go:130] > 7a9f5c5f9957
	I0731 23:55:46.448123    9020 command_runner.go:130] > 1d63a0cb77d5
	I0731 23:55:46.448191    9020 command_runner.go:130] > 8da81f74292e
	I0731 23:55:46.448191    9020 command_runner.go:130] > 284902a3378a
	I0731 23:55:46.448191    9020 command_runner.go:130] > 07b42ba54367
	I0731 23:55:46.448191    9020 command_runner.go:130] > 0ae3ab4f2984
	I0731 23:55:46.448191    9020 command_runner.go:130] > 7c2aeeb2eba1
	I0731 23:55:46.448191    9020 command_runner.go:130] > 534fd9010fca
	I0731 23:55:46.448191    9020 command_runner.go:130] > 945a9963cd1c
	I0731 23:55:46.448277    9020 command_runner.go:130] > 54a3651cfe8b
	I0731 23:55:46.448277    9020 command_runner.go:130] > 6ce3944d7d13
	I0731 23:55:46.448277    9020 command_runner.go:130] > 78312ba260a7
	I0731 23:55:46.448277    9020 command_runner.go:130] > 785da79d42d7
	I0731 23:55:46.448277    9020 command_runner.go:130] > 74068ed5155b
	I0731 23:55:46.448330    9020 command_runner.go:130] > 68e7a182b5fc
	I0731 23:55:46.448355    9020 docker.go:483] Stopping containers: [378f2a659316 7a9f5c5f9957 1d63a0cb77d5 8da81f74292e 284902a3378a 07b42ba54367 0ae3ab4f2984 7c2aeeb2eba1 534fd9010fca 945a9963cd1c 54a3651cfe8b 6ce3944d7d13 78312ba260a7 785da79d42d7 74068ed5155b 68e7a182b5fc]
	I0731 23:55:46.457162    9020 ssh_runner.go:195] Run: docker stop 378f2a659316 7a9f5c5f9957 1d63a0cb77d5 8da81f74292e 284902a3378a 07b42ba54367 0ae3ab4f2984 7c2aeeb2eba1 534fd9010fca 945a9963cd1c 54a3651cfe8b 6ce3944d7d13 78312ba260a7 785da79d42d7 74068ed5155b 68e7a182b5fc
	I0731 23:55:46.480184    9020 command_runner.go:130] > 378f2a659316
	I0731 23:55:46.480184    9020 command_runner.go:130] > 7a9f5c5f9957
	I0731 23:55:46.480184    9020 command_runner.go:130] > 1d63a0cb77d5
	I0731 23:55:46.480184    9020 command_runner.go:130] > 8da81f74292e
	I0731 23:55:46.480184    9020 command_runner.go:130] > 284902a3378a
	I0731 23:55:46.480184    9020 command_runner.go:130] > 07b42ba54367
	I0731 23:55:46.480184    9020 command_runner.go:130] > 0ae3ab4f2984
	I0731 23:55:46.480184    9020 command_runner.go:130] > 7c2aeeb2eba1
	I0731 23:55:46.480184    9020 command_runner.go:130] > 534fd9010fca
	I0731 23:55:46.480285    9020 command_runner.go:130] > 945a9963cd1c
	I0731 23:55:46.480285    9020 command_runner.go:130] > 54a3651cfe8b
	I0731 23:55:46.480285    9020 command_runner.go:130] > 6ce3944d7d13
	I0731 23:55:46.480285    9020 command_runner.go:130] > 78312ba260a7
	I0731 23:55:46.480285    9020 command_runner.go:130] > 785da79d42d7
	I0731 23:55:46.480285    9020 command_runner.go:130] > 74068ed5155b
	I0731 23:55:46.480285    9020 command_runner.go:130] > 68e7a182b5fc
	I0731 23:55:46.491781    9020 ssh_runner.go:195] Run: sudo systemctl stop kubelet
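
The "stopping kube-system containers" step above lists container IDs matching the k8s_.*_(kube-system)_ name filter and then stops them all in a single call. A sketch that shells out to the docker CLI the same way (assumes docker is on PATH; this is an illustration, not minikube's docker.go):

    // stopkube.go - stop every container whose name matches the kube-system filter.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("no kube-system containers to stop")
            return
        }
        fmt.Println("Stopping containers:", ids)
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            fmt.Println("docker stop failed:", err)
        }
    }
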
	I0731 23:55:46.531053    9020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:55:46.548680    9020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:55:46.548680    9020 kubeadm.go:157] found existing configuration files:
	
	I0731 23:55:46.559565    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:55:46.574187    9020 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:55:46.574187    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:55:46.586381    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:55:46.610757    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:55:46.626460    9020 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:55:46.626902    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:55:46.638392    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:55:46.664735    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:55:46.679802    9020 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:55:46.679802    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:55:46.691235    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:55:46.718843    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:55:46.736165    9020 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:55:46.736932    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:55:46.747610    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:55:46.776526    9020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:55:46.793926    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:47.048376    9020 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:55:47.049192    9020 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 23:55:47.049395    9020 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 23:55:47.049643    9020 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 23:55:47.050392    9020 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0731 23:55:47.051417    9020 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0731 23:55:47.058267    9020 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0731 23:55:47.059354    9020 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0731 23:55:47.059576    9020 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0731 23:55:47.060584    9020 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 23:55:47.060584    9020 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 23:55:47.061813    9020 command_runner.go:130] > [certs] Using the existing "sa" key
	I0731 23:55:47.064425    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.464886    9020 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:55:48.465482    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4009568s)
	I0731 23:55:48.465546    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.734790    9020 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:55:48.734790    9020 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:55:48.734893    9020 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:55:48.734950    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.838950    9020 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:55:48.839130    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.936613    9020 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:55:48.936838    9020 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:55:48.948263    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:49.458718    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:49.961890    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.453667    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.960826    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.985141    9020 command_runner.go:130] > 1911
	I0731 23:55:50.985141    9020 api_server.go:72] duration metric: took 2.0483425s to wait for apiserver process to appear ...
	I0731 23:55:50.985141    9020 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:55:50.985141    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.018852    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 23:55:54.018852    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 23:55:54.018852    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.129670    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:54.129741    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:54.491399    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.499039    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:54.499370    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:54.990921    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:55.008112    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:55.008112    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:55.500426    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:55.511907    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
	I0731 23:55:55.511907    9020 round_trippers.go:463] GET https://172.17.27.27:8443/version
	I0731 23:55:55.511907    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:55.511907    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:55.511907    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:55.532481    9020 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0731 23:55:55.532481    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Audit-Id: c00843c2-f504-4bc6-8632-e4c4028c65d5
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:55.532481    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:55.532481    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Content-Length: 263
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:55 GMT
	I0731 23:55:55.532481    9020 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 23:55:55.533583    9020 api_server.go:141] control plane version: v1.30.3
	I0731 23:55:55.533729    9020 api_server.go:131] duration metric: took 4.5485302s to wait for apiserver health ...
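
The health wait above polls https://172.17.27.27:8443/healthz until it returns 200 "ok", treating the intermediate 403 and 500 responses as "not ready yet". A Go sketch of that loop (TLS verification is skipped here for brevity; the real client trusts the cluster CA instead):

    // healthzwait.go - poll the apiserver healthz endpoint until it reports ok.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver healthz not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://172.17.27.27:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
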
	I0731 23:55:55.533729    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:55:55.533801    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:55:55.539946    9020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 23:55:55.555310    9020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 23:55:55.562316    9020 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 23:55:55.562316    9020 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0731 23:55:55.562369    9020 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0731 23:55:55.562369    9020 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:55:55.562369    9020 command_runner.go:130] > Access: 2024-07-31 23:54:24.316709300 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] > Change: 2024-07-31 23:54:16.502000000 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] >  Birth: -
	I0731 23:55:55.562369    9020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 23:55:55.562505    9020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 23:55:55.612607    9020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 23:55:56.954646    9020 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0731 23:55:56.955684    9020 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0731 23:55:56.955742    9020 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0731 23:55:56.955742    9020 command_runner.go:130] > daemonset.apps/kindnet configured
	I0731 23:55:56.955796    9020 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3431726s)
	I0731 23:55:56.955935    9020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:55:56.956121    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:55:56.956185    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:56.956205    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:56.956205    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:56.962546    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:55:56.963304    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Audit-Id: e6998b80-b52b-4891-850f-f790a75abcae
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:56.963304    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:56.963304    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:56 GMT
	I0731 23:55:56.965003    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87652 chars]
	I0731 23:55:56.972636    9020 system_pods.go:59] 12 kube-system pods found
	I0731 23:55:56.972703    9020 system_pods.go:61] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 23:55:56.972703    9020 system_pods.go:61] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 23:55:56.972703    9020 system_pods.go:61] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:55:56.972703    9020 system_pods.go:74] duration metric: took 16.7677ms to wait for pod list to return data ...
	I0731 23:55:56.972703    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:55:56.972703    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:55:56.972703    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:56.972703    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:56.972703    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:56.977871    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:55:56.977871    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Audit-Id: 8003ed79-6fff-43da-9002-2ba89eb97101
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:56.977871    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:56.977871    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:56.977871    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15625 chars]
	I0731 23:55:56.979881    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.979946    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980000    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.980000    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980000    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.980000    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980067    9020 node_conditions.go:105] duration metric: took 7.3645ms to run NodePressure ...
	I0731 23:55:56.980067    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:57.330938    9020 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 23:55:57.330938    9020 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 23:55:57.330938    9020 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 23:55:57.330938    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0731 23:55:57.330938    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.330938    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.330938    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.337984    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:55:57.337984    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.337984    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.337984    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Audit-Id: bb264a15-b455-4f27-a94c-ba5283e93d78
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.339072    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1850"},"items":[{"metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1780","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0731 23:55:57.340757    9020 kubeadm.go:739] kubelet initialised
	I0731 23:55:57.340757    9020 kubeadm.go:740] duration metric: took 9.8193ms waiting for restarted kubelet to initialise ...
	I0731 23:55:57.340757    9020 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:55:57.340757    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:55:57.340757    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.340757    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.340757    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.362391    9020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 23:55:57.362631    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Audit-Id: 0aba9c1c-8d0e-4efa-a86e-0d6848811039
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.362631    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.362631    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.364239    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1850"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87652 chars]
	I0731 23:55:57.368173    9020 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.368805    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:55:57.368896    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.368959    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.368959    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.400616    9020 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0731 23:55:57.401647    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.401647    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.401647    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.401647    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Audit-Id: f0288ba7-5f32-43a6-a9df-f70ff282ee55
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.402052    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:55:57.402672    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.402728    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.402728    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.402728    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.405654    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:57.405654    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Audit-Id: e350ea03-0dfd-48b8-96e1-4e257768e241
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.405654    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.405654    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.406126    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.407384    9020 pod_ready.go:97] node "multinode-411400" hosting pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.407384    9020 pod_ready.go:81] duration metric: took 39.2103ms for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.407384    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.407384    9020 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.408358    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:55:57.410392    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.410392    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.410392    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.426418    9020 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 23:55:57.426418    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.426418    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.426418    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Audit-Id: 15b2c0c2-c127-47d7-99a5-8adac4a00679
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.426418    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1780","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0731 23:55:57.426418    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.426418    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.426418    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.426418    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.443381    9020 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 23:55:57.443381    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Audit-Id: 84d95dd4-b165-435a-93f2-6132f5b097e7
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.443381    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.443381    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.444373    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.444373    9020 pod_ready.go:97] node "multinode-411400" hosting pod "etcd-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.444373    9020 pod_ready.go:81] duration metric: took 36.9886ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.444373    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "etcd-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.444373    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.444373    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:55:57.444373    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.444373    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.444373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.450360    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:55:57.450360    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.450360    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.450360    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Audit-Id: 6fe51bd8-edad-4a6f-b118-53f72637772d
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.451500    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1779","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0731 23:55:57.452226    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.452226    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.452226    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.452226    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.488708    9020 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0731 23:55:57.488708    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Audit-Id: 69c2cc17-db85-4e57-9eeb-1388b11ffeeb
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.488789    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.488789    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.488864    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.489396    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-apiserver-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.489451    9020 pod_ready.go:81] duration metric: took 45.0771ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.489451    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-apiserver-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.489451    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.489629    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:55:57.489629    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.489683    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.489683    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.499445    9020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 23:55:57.499445    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.499445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Audit-Id: 90f03f14-89dd-405c-bbb9-6fa0cc871e51
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.499445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.499445    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1777","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0731 23:55:57.500460    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.500460    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.500460    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.500460    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.502440    9020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:55:57.503438    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Audit-Id: 0c11bfef-0f5b-4636-a368-d099a5594715
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.503488    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.503488    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.503648    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.504103    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-controller-manager-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.504174    9020 pod_ready.go:81] duration metric: took 14.7231ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.504174    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-controller-manager-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.504247    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.556649    9020 request.go:629] Waited for 52.1154ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:55:57.556687    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:55:57.556687    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.556687    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.556687    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.559963    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:57.560028    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Audit-Id: 8cfd7098-49f3-4c48-a067-9e55915554af
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.560028    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.560028    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.560150    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:55:57.761211    9020 request.go:629] Waited for 200.0921ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:55:57.761583    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:55:57.761668    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.761722    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.761722    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.765024    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:57.765248    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Audit-Id: 38420fbb-c1d1-4d97-ba70-d1c98e7e988e
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.765248    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.765248    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.766126    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1757","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:55:57.766319    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:55:57.766319    9020 pod_ready.go:81] duration metric: took 262.0682ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.766319    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:55:57.766319    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.961983    9020 request.go:629] Waited for 195.4848ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:55:57.962089    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:55:57.962089    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.962089    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.962089    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.968955    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:55:57.969130    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Audit-Id: fcfae0dc-0f18-4fc1-9d0a-eadad8e818de
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.969196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.969196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.969196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.969935    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:55:58.165387    9020 request.go:629] Waited for 194.2487ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.165387    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.165387    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.165387    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.165775    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.169894    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:55:58.169894    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Audit-Id: 1a0b59b8-e5fe-47e2-b3b6-64c8902c7948
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.169972    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.169972    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.169972    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:58.170723    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-proxy-chdxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.170723    9020 pod_ready.go:81] duration metric: took 404.3989ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:58.170846    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-proxy-chdxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.170846    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.365181    9020 request.go:629] Waited for 194.0071ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:55:58.365294    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:55:58.365294    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.365294    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.365294    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.368229    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.369227    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Audit-Id: 5fc054b8-090a-4c44-a299-b678cf84b9f3
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.369285    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.369285    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.369285    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"610","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0731 23:55:58.569407    9020 request.go:629] Waited for 199.2126ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:55:58.569757    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:55:58.569757    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.569757    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.569757    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.572483    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.572483    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.572483    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.572483    9020 round_trippers.go:580]     Audit-Id: cc0f5b0d-c686-4c59-bf1e-38bcafa2211d
	I0731 23:55:58.573035    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.573035    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.573035    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.573035    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.573197    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"1679","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3825 chars]
	I0731 23:55:58.574152    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:55:58.574233    9020 pod_ready.go:81] duration metric: took 403.3815ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.574233    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.756510    9020 request.go:629] Waited for 182.0789ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:55:58.756605    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:55:58.756605    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.756730    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.756822    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.759135    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.759135    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.759135    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.759135    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Audit-Id: 69ffc305-0ff3-488e-b8d6-30473cd7afa0
	I0731 23:55:58.760032    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.760070    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1778","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0731 23:55:58.958678    9020 request.go:629] Waited for 197.4273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.958896    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.959053    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.959108    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.959108    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.962375    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:58.962375    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.962375    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Audit-Id: 0c27f944-9a4f-4a69-976c-2d9a07b34440
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.962375    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.963356    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:58.963745    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-scheduler-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.963825    9020 pod_ready.go:81] duration metric: took 389.5877ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:58.963825    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-scheduler-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.963925    9020 pod_ready.go:38] duration metric: took 1.6230476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:55:58.963925    9020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:55:58.992745    9020 command_runner.go:130] > -16
	I0731 23:55:58.992840    9020 ops.go:34] apiserver oom_adj: -16
	I0731 23:55:58.992840    9020 kubeadm.go:597] duration metric: took 12.6598063s to restartPrimaryControlPlane
	I0731 23:55:58.992840    9020 kubeadm.go:394] duration metric: took 12.7203289s to StartCluster
	I0731 23:55:58.992903    9020 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:58.993042    9020 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:58.994152    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:58.996303    9020 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 23:55:58.996241    9020 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:55:58.996846    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:55:59.003558    9020 out.go:177] * Verifying Kubernetes components...
	I0731 23:55:59.010577    9020 out.go:177] * Enabled addons: 
	I0731 23:55:59.018254    9020 addons.go:510] duration metric: took 22.0124ms for enable addons: enabled=[]
	I0731 23:55:59.024220    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:59.317039    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:55:59.343141    9020 node_ready.go:35] waiting up to 6m0s for node "multinode-411400" to be "Ready" ...
	I0731 23:55:59.343478    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:59.343478    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:59.343554    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:59.343554    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:59.348356    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:55:59.348356    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Audit-Id: 1f9449f5-e768-4963-9eef-c77267852ca2
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:59.348356    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:59.348356    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:59 GMT
	I0731 23:55:59.348356    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:59.853348    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:59.853520    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:59.853520    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:59.853520    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:59.856874    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:59.856874    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Audit-Id: 022a6f7c-ac57-4ec7-ae9e-1982affd4ff9
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:59.856874    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:59.856874    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:59 GMT
	I0731 23:55:59.858147    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:00.355395    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:00.355489    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:00.355489    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:00.355489    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:00.358438    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:00.359470    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:00 GMT
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Audit-Id: 72707bce-9584-49d8-91cf-9c7fe5abf303
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:00.359470    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:00.359470    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:00.360048    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:00.851828    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:00.851910    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:00.851910    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:00.851910    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:00.856246    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:00.856246    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:00.856246    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:00.856246    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:00.856246    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:00.856246    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:00.856461    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:00 GMT
	I0731 23:56:00.856461    9020 round_trippers.go:580]     Audit-Id: b587087e-9f30-41ae-9fb8-a8744eb0707e
	I0731 23:56:00.856945    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:01.353979    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:01.353979    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:01.353979    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:01.353979    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:01.356971    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:01.356971    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:01.356971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:01 GMT
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Audit-Id: 289bbf9a-3594-4324-96a1-b8ea9fe5565d
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:01.356971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:01.358075    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:01.358619    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:01.854476    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:01.854476    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:01.854476    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:01.854476    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:01.870318    9020 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 23:56:01.870596    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:01.870596    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:01.870596    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:01.870596    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:01 GMT
	I0731 23:56:01.870596    9020 round_trippers.go:580]     Audit-Id: 14def455-3971-425e-a526-b0b5c7662d5b
	I0731 23:56:01.870685    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:01.870685    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:01.871078    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:02.357588    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:02.357775    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:02.357775    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:02.357775    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:02.361299    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:02.361299    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:02.361299    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:02.361299    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:02 GMT
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Audit-Id: 7a073046-28e0-4d2b-8423-ad2c1428df82
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:02.361977    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:02.844662    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:02.844891    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:02.844891    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:02.844891    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:02.849611    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:02.849611    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:02.849611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:02.849611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:02 GMT
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Audit-Id: 44abbc17-1a5c-4f87-a835-2fcb1b4cadae
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:02.850192    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.348381    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:03.348381    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:03.348463    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:03.348463    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:03.351729    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:03.352445    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:03 GMT
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Audit-Id: 73bf4412-1b44-4c22-b52e-7b2c179f97ae
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:03.352445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:03.352507    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:03.352507    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.849381    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:03.849490    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:03.849490    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:03.849490    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:03.852799    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:03.852799    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:03 GMT
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Audit-Id: d070cd40-4a0a-496f-91f1-419d00deb350
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:03.852799    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:03.853519    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:03.853836    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.854341    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:04.348106    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:04.348106    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:04.348106    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:04.348106    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:04.354338    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:04.354433    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:04.354433    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:04.354433    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:04 GMT
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Audit-Id: a207e2c2-5a7b-42c6-8cf4-56885b58d44e
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:04.354544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:04.354934    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:04.846436    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:04.846436    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:04.846436    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:04.846436    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:04.850822    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:04.850852    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:04.850852    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:04.850852    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:04 GMT
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Audit-Id: c860df3b-7dc7-4e48-942f-675376dbb963
	I0731 23:56:04.850852    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:05.357823    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:05.358216    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:05.358263    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:05.358263    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:05.362670    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:05.362670    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:05 GMT
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Audit-Id: 45c1f0c0-15a2-4d08-9cc7-8c0eb8b59e36
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:05.363068    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:05.363068    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:05.363296    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:05.844444    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:05.844444    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:05.844444    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:05.844444    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:05.848065    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:05.848065    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:05.848065    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:05.848065    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:05 GMT
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Audit-Id: b12bfdde-cb19-4c09-8e51-b56b3b0255c1
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:05.848937    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:06.357894    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:06.357972    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:06.357972    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:06.357972    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:06.361366    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:06.361366    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:06 GMT
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Audit-Id: 1cc6e887-1033-4265-9a2b-60fe3ff4763d
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:06.361366    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:06.361366    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:06.362185    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:06.362641    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:06.846835    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:06.846835    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:06.846835    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:06.846835    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:06.849455    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:06.849696    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Audit-Id: 14cc98cf-eb34-4c03-8163-2b681a72dccb
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:06.849696    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:06.849771    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:06.849771    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:06 GMT
	I0731 23:56:06.849927    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:07.346851    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:07.346851    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:07.346851    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:07.346851    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:07.350498    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:07.351002    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Audit-Id: 377c342d-ffec-45c3-a054-b9e9b868239c
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:07.351002    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:07.351002    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:07 GMT
	I0731 23:56:07.351002    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:07.844063    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:07.844063    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:07.844194    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:07.844194    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:07.848097    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:07.848577    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:07.848577    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:07 GMT
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Audit-Id: 09c14c31-88ef-47b9-b2cc-0aa755125a58
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:07.848577    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:07.848577    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.345300    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:08.345642    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:08.345642    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:08.345690    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:08.348292    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:08.348350    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:08.348350    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:08.348350    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:08 GMT
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Audit-Id: 370ef678-fd9a-48cd-a621-45fc655457e1
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:08.349055    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.845492    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:08.845732    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:08.845732    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:08.845732    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:08.850942    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:08.850942    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Audit-Id: a589f167-1012-4e2f-8101-eb0bcef9664b
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:08.850942    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:08.850942    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:08 GMT
	I0731 23:56:08.851520    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.851697    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:09.348313    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:09.348313    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:09.348313    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:09.348313    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:09.352817    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:09.352817    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Audit-Id: f7e3e44c-c69a-43c9-a50a-12b0785f5021
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:09.353224    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:09.353224    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:09 GMT
	I0731 23:56:09.353465    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:09.846437    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:09.846662    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:09.846662    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:09.846662    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:09.849976    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:09.850626    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:09.850626    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:09 GMT
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Audit-Id: b93926cc-7b26-4a56-96cd-ce345789698f
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:09.850626    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:09.850705    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:10.347116    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:10.347116    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:10.347116    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:10.347116    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:10.350409    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:10.351463    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:10.351463    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:10.351463    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:10 GMT
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Audit-Id: 2c6282cb-8c2c-4b0f-a8d5-d3245bff50a3
	I0731 23:56:10.351539    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:10.351737    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:10.845868    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:10.845986    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:10.845986    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:10.846097    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:10.849018    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:10.849018    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:10.849018    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:10 GMT
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Audit-Id: 85c6a43e-31fb-4cae-aaf2-8db1f6ccdb50
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:10.849817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:10.850147    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:11.345135    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:11.345233    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:11.345233    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:11.345233    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:11.350275    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:11.350275    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:11.350275    9020 round_trippers.go:580]     Audit-Id: e463648e-0381-493f-bc98-fc3ebc3628fa
	I0731 23:56:11.350389    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:11.350389    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:11.350389    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:11.350389    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:11.350464    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:11 GMT
	I0731 23:56:11.350551    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:11.351051    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:11.846418    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:11.846418    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:11.846548    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:11.846548    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:11.852656    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:11.852698    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Audit-Id: c82505ad-292a-472c-aa93-4b38724a3164
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:11.852698    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:11.852698    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:11 GMT
	I0731 23:56:11.852954    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:12.347010    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:12.347010    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:12.347010    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:12.347010    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:12.350630    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:12.350630    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:12.350630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:12.350630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:12.350630    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:12 GMT
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Audit-Id: 8c055061-ffa5-4ba3-a7dc-b399e0d3d6d7
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:12.351566    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:12.847286    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:12.847547    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:12.847547    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:12.847547    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:12.854421    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:12.854421    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Audit-Id: 49a504c1-40e8-4f07-aeb1-2231e5695319
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:12.854421    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:12.854421    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:12 GMT
	I0731 23:56:12.854977    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:13.349390    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:13.349390    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:13.349616    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:13.349616    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:13.352971    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:13.352971    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:13.352971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:13 GMT
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Audit-Id: 02c63068-531d-481e-ad20-5894eda7d4b0
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:13.353179    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:13.353382    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:13.353806    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:13.844743    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:13.844743    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:13.844840    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:13.844840    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:13.847817    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:13.847817    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:13.847817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:13.847817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:13 GMT
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Audit-Id: ad84d325-9653-466e-8e62-3b49d275aa82
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:13.848404    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:14.357831    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.357831    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.357930    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.357930    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.361330    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:14.361330    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Audit-Id: 574a782e-d523-41ce-80fc-72a8ccb6ee2f
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.361330    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.361330    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.362342    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.362493    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:14.857183    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.857183    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.857183    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.857183    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.859883    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.859883    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.859883    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Audit-Id: 589bc9d4-8881-47d3-b328-dce17e97c7bd
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.860277    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.860469    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:14.860742    9020 node_ready.go:49] node "multinode-411400" has status "Ready":"True"
	I0731 23:56:14.860742    9020 node_ready.go:38] duration metric: took 15.5172045s for node "multinode-411400" to be "Ready" ...
	I0731 23:56:14.860742    9020 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
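	The roughly 500ms GET loop above is minikube waiting for the node's NodeReady condition to flip to True, which here took about 15.5s. A minimal client-go sketch of that kind of wait follows; it is illustrative only (not minikube's actual node_ready.go), and it assumes the default kubeconfig written by minikube start plus the profile name quoted in this log. The 6-minute ceiling is borrowed from the pod wait announced above, purely for illustration.

	// Illustrative sketch only (not minikube's node_ready.go): poll the same
	// /api/v1/nodes/multinode-411400 endpoint via client-go until the NodeReady
	// condition is True, using the ~500ms interval visible in the timestamps above.
	// Kubeconfig path and timeout are assumptions for this example.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-411400", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("node multinode-411400 is Ready")
	}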
	I0731 23:56:14.860742    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:14.860742    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.860742    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.860742    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.873449    9020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 23:56:14.873449    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.873449    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Audit-Id: 37c84592-884c-40eb-808a-09f1df5ebaa6
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.873449    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.875427    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1899"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86085 chars]
	I0731 23:56:14.880051    9020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:14.880252    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:14.880252    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.880252    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.880252    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.882842    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.882842    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Audit-Id: fe7d7c12-0f8f-42db-8d38-af28b1688ba0
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.882842    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.882842    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.883868    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:14.884402    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.884491    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.884491    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.884491    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.886700    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.886700    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.886851    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.886851    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Audit-Id: c52c4ee6-0d29-489d-8931-5e01e7eaa20d
	I0731 23:56:14.887121    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:15.388592    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:15.388592    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.388592    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.388592    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.392214    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:15.392214    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Audit-Id: cf2e8f89-6d79-4e4d-9730-cc80abd3b699
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.392214    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.392214    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.392301    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.392459    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:15.393206    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:15.393206    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.393206    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.393206    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.397902    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:15.397902    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.397902    9020 round_trippers.go:580]     Audit-Id: a5493160-594e-456a-a362-2deddff8baaa
	I0731 23:56:15.397902    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.397996    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.397996    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.397996    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.397996    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.398207    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:15.888425    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:15.888425    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.888425    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.888611    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.892012    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:15.892548    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.892548    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.892548    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Audit-Id: b2ea96e8-3993-41dc-9c5b-0b06043182d2
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.892924    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:15.894160    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:15.894160    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.894250    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.894250    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.896497    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:15.896497    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Audit-Id: 43b5ade4-ccb1-42f7-8b0c-95a4cd41f860
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.896497    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.896497    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.897196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.897610    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.389073    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:16.389073    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.389073    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.389151    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.392392    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:16.392759    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.392759    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.392759    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Audit-Id: d2080e00-8048-4613-bba1-b00e8a265f51
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.392978    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:16.393250    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:16.393250    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.393250    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.393250    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.396027    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:16.396609    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Audit-Id: 10f68a14-0c69-4e8a-9085-ba2c583ac703
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.396609    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.396661    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.396661    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.396747    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.887141    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:16.887232    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.887232    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.887232    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.890534    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:16.890534    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Audit-Id: 093d44ee-e0bc-492b-a329-4cb19fc26026
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.890534    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.891436    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.891607    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:16.892298    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:16.892373    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.892373    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.892373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.894612    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:16.894612    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.894612    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.894612    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.895274    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.895274    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.895274    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.895274    9020 round_trippers.go:580]     Audit-Id: 464810d3-b6eb-4abb-b8c2-c0dff0dcd027
	I0731 23:56:16.895901    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.896626    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:17.387767    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:17.388055    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.388055    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.388055    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.391012    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.391012    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Audit-Id: bb69bc17-0d02-44b9-acca-359f08a02fc3
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.391012    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.391012    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.391877    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:17.393155    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:17.393222    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.393222    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.393222    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.395492    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.395492    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Audit-Id: 1def7d6e-2944-4db5-8634-58494d036cd8
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.395492    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.395492    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.396223    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:17.888543    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:17.888833    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.888833    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.888833    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.893151    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:17.893151    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Audit-Id: 46e95556-186f-4868-aec8-b9c80967620e
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.893151    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.893151    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.893622    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:17.894329    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:17.894329    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.894329    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.894329    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.896933    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.896933    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.896933    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.897575    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.897575    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Audit-Id: 7df99dd4-90d8-4872-aec7-2fcb578558de
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.897803    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.387607    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:18.387607    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.387607    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.387607    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.394804    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:56:18.394804    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.394804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.394804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Audit-Id: f092b29a-e6cf-4403-ab9c-568c41cf380f
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.395562    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:18.396439    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:18.396439    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.396439    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.396439    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.398805    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:18.398805    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Audit-Id: 109d4364-a622-4578-972b-3ae52dfed42b
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.398805    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.398805    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.398805    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.887062    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:18.887062    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.887062    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.887062    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.890724    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:18.891286    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.891286    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.891286    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Audit-Id: 30fe43a5-365d-486d-a2b1-59315de60a6f
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.892492    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:18.893355    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:18.893355    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.893355    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.893355    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.896317    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:18.896435    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.896435    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.896435    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Audit-Id: e79c7cbd-4b85-413a-9d72-1663a173cdea
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.896435    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.897363    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:19.386336    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:19.386336    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.386336    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.386336    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.389062    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:19.389062    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Audit-Id: 6e8851cd-feb0-4c3b-a3a7-604ef81e1397
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.389062    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.389062    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.390420    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:19.391179    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:19.391268    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.391268    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.391392    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.396406    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:19.396406    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.396406    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Audit-Id: 04a5da4b-5f0f-443b-a710-c04dcdc0a60c
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.396406    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.397201    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:19.887227    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:19.887227    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.887227    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.887373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.890598    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:19.890598    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Audit-Id: 9d5e8324-334f-44d5-b655-70d8d527bcfa
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.890598    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.890598    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.891031    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.891389    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:19.892130    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:19.892183    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.892183    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.892183    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.898097    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:19.898097    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Audit-Id: 0faf23c3-b7b6-4c77-b0b3-f3612378ac5e
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.898097    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.898097    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.898740    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.388688    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:20.388688    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.388752    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.388752    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.393730    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:20.393956    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.393956    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.393956    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.393956    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.393956    9020 round_trippers.go:580]     Audit-Id: e479125b-8a09-4b2e-b3a8-d94dc62e98e8
	I0731 23:56:20.394002    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.394091    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.394119    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:20.394968    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:20.394968    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.394968    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.394968    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.399256    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:20.399256    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.399477    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.399477    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Audit-Id: 0e9d1bff-749b-4cbf-ad89-1c2f062633ab
	I0731 23:56:20.399542    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.888613    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:20.888613    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.888613    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.888613    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.892081    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:20.892081    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.892081    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Audit-Id: 5bd2d2c0-8daa-46bb-9de2-ae98d0b7c172
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.892081    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.892859    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:20.893681    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:20.893681    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.893681    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.893681    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.896998    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:20.896998    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Audit-Id: 10b0fe75-34de-4fdf-b8fa-e44a6090ee94
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.897105    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.897105    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.897320    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.897714    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:21.387769    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:21.387769    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.388239    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.388239    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.391611    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:21.391611    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.391947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.391947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Audit-Id: 64ba2336-f6ff-47bd-86c2-c1afc6db4132
	I0731 23:56:21.392239    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:21.393005    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:21.393005    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.393005    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.393005    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.396585    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:21.396585    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.396925    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.396925    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Audit-Id: f7dbf91c-fc37-4375-a9d9-767c83f7e370
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.396925    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:21.888388    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:21.888467    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.888467    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.888467    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.892747    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:21.893677    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.893677    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.893737    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.893737    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Audit-Id: 521ac87a-69bd-4053-a50a-19b0f547cdd3
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.893929    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:21.894802    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:21.894964    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.894964    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.894964    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.896781    9020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:56:21.896781    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.896781    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.897634    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.897634    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Audit-Id: 177885c9-396b-4efc-b5b2-645e49be7083
	I0731 23:56:21.898058    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:22.386386    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:22.386386    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.386386    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.386386    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.389998    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:22.390910    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Audit-Id: aef99beb-d04a-423f-a94d-5a5a9db20703
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.390910    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.391016    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.391016    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.391253    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:22.392200    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:22.392256    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.392256    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.392256    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.395022    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:22.395679    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Audit-Id: 01cdb5bb-f8e8-4e92-9665-bc9d14aed9ec
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.395971    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:22.883846    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:22.883911    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.883911    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.883911    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.887935    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:22.887935    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Audit-Id: 6f0c593d-51ed-4af5-af59-c8efc6596c3c
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.888500    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.888500    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.888614    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:22.889329    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:22.889329    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.889329    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.889329    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.893001    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:22.893001    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.893001    9020 round_trippers.go:580]     Audit-Id: cdf80642-cf63-4cb8-987a-64b2918d1fdb
	I0731 23:56:22.893001    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.893819    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.893819    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.893819    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.893819    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.894135    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:23.384893    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:23.384893    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.384893    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.384893    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.388570    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:23.388982    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.388982    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Audit-Id: cf0612c6-3569-4bd6-bc70-799d3ee49ace
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.388982    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.388982    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:23.389967    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:23.389967    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.390076    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.390076    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.392993    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:23.394031    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.394031    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Audit-Id: 9b351eaf-4a10-450a-b634-ae42985d693f
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.394082    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.394082    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:23.394814    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:23.884639    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:23.884639    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.884639    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.884639    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.887694    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:23.887694    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Audit-Id: 93902d49-f1a9-461b-98e5-5356e59b4e0a
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.887694    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.887694    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.888466    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:23.889139    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:23.889139    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.889139    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.889139    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.891848    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:23.891848    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Audit-Id: 13c25a5d-0902-44aa-ab2e-d0a968892769
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.891848    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.891848    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.892598    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:24.384288    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:24.384383    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.384383    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.384383    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.387749    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:24.387749    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Audit-Id: ab1c334e-b8e9-4a2e-859c-3a2ad4d86194
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.388282    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.388282    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.388282    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.388494    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:24.389002    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:24.389002    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.389002    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.389002    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.391568    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:24.391568    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.391568    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.391568    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Audit-Id: 5a4fec8e-d1f7-4c71-9124-1d025f1f12b7
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.392453    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:24.882755    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:24.882755    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.882755    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.882755    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.885843    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:24.885843    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.885843    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Audit-Id: ec5a7e9b-3681-414e-ab18-7f987b8cff3c
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.886260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.886260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.886455    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:24.887004    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:24.887213    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.887213    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.887213    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.889584    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:24.889584    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.889584    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Audit-Id: 6ee97c38-0151-4eb6-8950-d6ed4def04ba
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.889584    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.890317    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.382836    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:25.382836    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.382836    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.382931    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.385510    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:25.385510    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.385510    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.385510    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Audit-Id: d63181f8-17e6-4ddf-9a96-846d9a786cbd
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.386767    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:25.387637    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:25.387637    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.387743    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.387743    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.391732    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:25.391732    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Audit-Id: 4b4aba75-923a-4655-b330-901e69a44ebc
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.391732    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.391732    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.392482    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.881384    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:25.881476    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.881476    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.881476    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.885747    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:25.885747    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.885747    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.885747    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.886143    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Audit-Id: 37e984d0-c6e3-40ae-83ec-20ca3a71dd73
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.886308    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:25.886957    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:25.886957    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.886957    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.886957    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.889589    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:25.889589    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.889589    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.889589    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Audit-Id: 021a7b58-1509-4010-bfaa-df2824bca110
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.890651    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.890651    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:26.383836    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:26.384048    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.384048    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.384048    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.390904    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:26.390904    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Audit-Id: 75d02ceb-30ae-41b1-8d46-579a0df43af5
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.390904    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.390904    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.391529    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:26.392340    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:26.392340    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.392340    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.392340    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.394899    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:26.394899    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.394899    9020 round_trippers.go:580]     Audit-Id: 26c9bdcc-5634-40df-8d83-295b3e2afa2c
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.396027    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:26.883719    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:26.883774    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.883824    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.883824    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.890203    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:26.890203    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Audit-Id: 4b90f8cd-5a24-4ae9-916c-50da0cfd1ec6
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.890203    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.890203    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.896347    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:26.897092    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:26.897147    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.897147    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.897212    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.899919    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:26.900796    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Audit-Id: 3bf4b77c-2528-4a14-ae4c-3011248e54cb
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.900796    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.900796    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.901274    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.388511    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:27.388687    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.388745    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.388745    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.395020    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:27.395020    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Audit-Id: f8120cc1-d868-4e6b-b3ee-64bc4344d06d
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.395020    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.395020    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.395020    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:27.395800    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:27.396380    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.396380    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.396480    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.399385    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:27.399385    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.399385    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Audit-Id: bbc7fe47-283c-454d-b35f-1bc2fbfc3c43
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.399385    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.399987    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.894470    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:27.894470    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.894470    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.894470    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.901451    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:27.901451    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Audit-Id: 16a3de77-1669-416a-acd0-1581f8142e97
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.901451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.901451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.901451    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:27.902471    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:27.902471    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.902471    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.902471    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.918451    9020 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 23:56:27.918451    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Audit-Id: 479c258a-cff9-490e-96a4-45b988dcba5a
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.918451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.918451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.918451    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.919506    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:28.380993    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:28.381188    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.381188    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.381188    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.384593    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:28.384593    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Audit-Id: 653e81d3-4b9b-4807-b572-c2e48bf48a95
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.384593    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.384593    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.385765    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:28.386500    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:28.386500    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.386600    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.386600    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.389446    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.389827    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Audit-Id: 8d67c5f9-338b-49b5-b788-43263125a8cc
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.389827    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.389827    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.390483    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:28.880670    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:28.880670    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.880670    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.880670    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.883325    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.883325    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.883325    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.883325    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Audit-Id: 9368ece9-1eb4-44b8-b69a-7d5b859df901
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.884769    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:28.885425    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:28.885425    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.885425    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.885425    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.887977    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.887977    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.888396    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.888396    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Audit-Id: bca32cbb-9ed7-4ee3-b4e0-b8c3a6f118fa
	I0731 23:56:28.888700    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.384352    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:29.384478    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.384478    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.384548    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.386841    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.387891    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.387891    9020 round_trippers.go:580]     Audit-Id: 089e5977-4235-4abf-a014-57e1ea51d78a
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.387923    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.387923    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.388088    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0731 23:56:29.389420    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.389420    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.389420    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.389420    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.391804    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.391804    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.391804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.391804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Audit-Id: f9734abd-3fdd-48dd-80fc-2948a33e4bb5
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.392898    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.393554    9020 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.393554    9020 pod_ready.go:81] duration metric: took 14.5132735s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
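	(The readiness wait above is the standard "get the pod, check its Ready condition, sleep, repeat" loop; each iteration in the log is one GET of the pod followed by one GET of the node, at roughly half-second intervals, until the condition flips to True. Below is a minimal client-go sketch of that kind of poll for reference. It is not minikube's pod_ready implementation; the kubeconfig path and the 500ms interval are assumptions for illustration, while the namespace, pod name, and 6-minute budget are taken from the log above.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Build a client from the local kubeconfig (illustrative; minikube wires
		// its own per-profile kubeconfig rather than the default home file).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll about every 500ms for up to 6 minutes, the budget shown in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-z8gtw", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			fmt.Println("pod never became Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}

	(The sketch only re-fetches the pod on each iteration; the extra node GET seen in the log is minikube confirming the host node is still registered while it waits.)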
	I0731 23:56:29.393554    9020 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.393645    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:56:29.393717    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.393745    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.393745    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.396382    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.396382    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.396382    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.396630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.396630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Audit-Id: eeb3ab7c-fc43-472d-8204-482d39151085
	I0731 23:56:29.396815    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1862","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0731 23:56:29.397450    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.397450    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.397450    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.397450    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.401037    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.401287    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.401287    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.401287    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Audit-Id: a6cfbdb4-869d-4aee-941e-7f5b27ec2b3f
	I0731 23:56:29.401459    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.401459    9020 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.401459    9020 pod_ready.go:81] duration metric: took 7.9045ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.402007    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.402185    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:56:29.402185    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.402185    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.402185    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.405090    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.405090    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.405090    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.405090    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.405525    9020 round_trippers.go:580]     Audit-Id: 85742812-a408-4011-873d-933b216a699a
	I0731 23:56:29.405794    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1864","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0731 23:56:29.405794    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.405794    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.406339    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.406339    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.408569    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.408569    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.408569    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.408569    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Audit-Id: 466479ff-e93d-41d8-b4b8-6e479140ac45
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.408569    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.408569    9020 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.408569    9020 pod_ready.go:81] duration metric: took 6.5613ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.408569    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.408569    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:56:29.408569    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.408569    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.409566    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.412333    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.412333    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.412333    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Audit-Id: 7d22dab1-7c02-4590-8454-9d55e54df448
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.412333    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.413220    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1891","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0731 23:56:29.413899    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.413899    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.413967    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.413967    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.417721    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.417772    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Audit-Id: 9a99f76a-d4eb-40c0-86d4-6d9c6a1fbdb8
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.417787    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.417787    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.417888    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.418753    9020 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.418753    9020 pod_ready.go:81] duration metric: took 10.1843ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.418753    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.418927    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:56:29.418927    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.418927    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.419031    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.422058    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.422058    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Audit-Id: fcd95dbd-a00b-4ef2-92c9-3f98beb27867
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.422058    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.422058    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.422920    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:56:29.422920    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:56:29.422920    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.422920    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.422920    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.425403    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.425403    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.425403    9020 round_trippers.go:580]     Audit-Id: 32637fb7-462a-474d-bfc2-8d1d42a0a168
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.425964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.425964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.426030    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1888","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:56:29.426668    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:56:29.426668    9020 pod_ready.go:81] duration metric: took 7.9147ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:56:29.426668    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:56:29.426668    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.586997    9020 request.go:629] Waited for 160.1041ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:56:29.587193    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:56:29.587193    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.587193    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.587193    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.590269    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.590269    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Audit-Id: 02ff3148-d78d-40c2-9478-fa8a33f2ee59
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.590269    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.590617    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.590617    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.590802    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:56:29.789851    9020 request.go:629] Waited for 198.3558ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.789851    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.789851    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.789851    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.789851    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.792663    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.793431    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Audit-Id: 3e29ee6c-2269-4f65-8bba-b17357f5e4d2
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.793431    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.793431    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.793741    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.794650    9020 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.794717    9020 pod_ready.go:81] duration metric: took 368.0448ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.794717    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.990445    9020 request.go:629] Waited for 195.6325ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:56:29.990574    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:56:29.990773    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.990854    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.990854    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.994168    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.994168    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Audit-Id: 36360ef5-72e8-4a2d-b9c9-53238bfa3c44
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.994479    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.994479    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.994759    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"610","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0731 23:56:30.192779    9020 request.go:629] Waited for 197.1053ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:56:30.192889    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:56:30.192889    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.192889    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.192889    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.196430    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:30.197162    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.197162    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Audit-Id: 5138a96d-f031-40b1-8d91-24e4834636d6
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.197226    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.197226    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.197226    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"1679","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3825 chars]
	I0731 23:56:30.197947    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:30.197947    9020 pod_ready.go:81] duration metric: took 403.2245ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.197947    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.395990    9020 request.go:629] Waited for 198.0404ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:56:30.396531    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:56:30.396531    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.396531    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.396531    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.402405    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:30.402629    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Audit-Id: 40a2f5d7-df03-4dd8-a127-9a76c1cc242e
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.402629    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.402683    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.402683    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.402820    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1875","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0731 23:56:30.598146    9020 request.go:629] Waited for 194.2236ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:30.598146    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:30.598146    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.598146    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.598264    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.612592    9020 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 23:56:30.612592    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Audit-Id: edc593f6-368d-4520-bd68-daeb48e250ba
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.612592    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.612592    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.613043    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.613464    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:30.614017    9020 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:30.614072    9020 pod_ready.go:81] duration metric: took 416.12ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.614072    9020 pod_ready.go:38] duration metric: took 15.7531301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
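The readiness loop logged above (pod_ready.go) repeatedly GETs each system-critical pod and its node until the pod reports the Ready condition. For reference only (not part of the captured log), a minimal client-go sketch of the same kind of check, assuming the profile's kubeconfig is the current context and using the etcd pod name from this run purely as an illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: the default kubeconfig points at the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s, up to 6m, until the pod's Ready condition is True,
        // mirroring the 6m0s wait shown in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-411400", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("pod ready:", err == nil)
    }
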
	I0731 23:56:30.614187    9020 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:56:30.625019    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:56:30.650486    9020 command_runner.go:130] > 1911
	I0731 23:56:30.650486    9020 api_server.go:72] duration metric: took 31.6537148s to wait for apiserver process to appear ...
	I0731 23:56:30.650733    9020 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:56:30.650733    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:56:30.658677    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
	I0731 23:56:30.659725    9020 round_trippers.go:463] GET https://172.17.27.27:8443/version
	I0731 23:56:30.659773    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.659773    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.659807    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.660666    9020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 23:56:30.660666    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.660666    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.661446    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Content-Length: 263
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Audit-Id: 3fab2983-e7f7-4244-9f68-bff2e1b9b479
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.661500    9020 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 23:56:30.661500    9020 api_server.go:141] control plane version: v1.30.3
	I0731 23:56:30.661565    9020 api_server.go:131] duration metric: took 10.832ms to wait for apiserver health ...
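The healthz and /version probes above are plain HTTPS GETs against the apiserver endpoint recorded in the log. A standalone sketch of the same two probes, assuming anonymous access to /healthz and /version is allowed (the default kubeadm binding) and skipping certificate verification for brevity; minikube itself uses the profile's client certificates instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumption: quick diagnostic probe only, so TLS verification is skipped.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://172.17.27.27:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Println(path, resp.StatusCode, string(body))
        }
    }
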
	I0731 23:56:30.661565    9020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:56:30.800066    9020 request.go:629] Waited for 138.0565ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:30.800159    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:30.800159    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.800159    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.800159    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.805401    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:30.805401    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Audit-Id: 3096a232-ffc7-4c7d-b65f-3acb0c901018
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.805401    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.805401    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.809057    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0731 23:56:30.813307    9020 system_pods.go:59] 12 kube-system pods found
	I0731 23:56:30.813396    9020 system_pods.go:61] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:56:30.813535    9020 system_pods.go:61] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:56:30.813568    9020 system_pods.go:74] duration metric: took 152.0008ms to wait for pod list to return data ...
	I0731 23:56:30.813568    9020 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:56:30.986667    9020 request.go:629] Waited for 172.7652ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:56:30.986750    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:56:30.986909    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.986909    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.986909    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.990685    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:30.990810    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Content-Length: 262
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Audit-Id: d8540bf8-7b0d-4002-8a5e-e23b3f9bc435
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.990810    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.990810    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.990908    9020 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"16d02427-a81b-4fff-a90d-597cdeb70239","resourceVersion":"315","creationTimestamp":"2024-07-31T23:32:40Z"}}]}
	I0731 23:56:30.990973    9020 default_sa.go:45] found service account: "default"
	I0731 23:56:30.990973    9020 default_sa.go:55] duration metric: took 177.4028ms for default service account to be created ...
	I0731 23:56:30.990973    9020 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:56:31.191030    9020 request.go:629] Waited for 199.763ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:31.191030    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:31.191030    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:31.191030    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:31.191149    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:31.197931    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:31.197931    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Audit-Id: 01c5fa32-b66a-4bcc-81e7-3a631d8922a5
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:31.197931    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:31.197931    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:31.200783    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0731 23:56:31.205819    9020 system_pods.go:86] 12 kube-system pods found
	I0731 23:56:31.205819    9020 system_pods.go:89] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:56:31.205819    9020 system_pods.go:126] duration metric: took 214.8437ms to wait for k8s-apps to be running ...
	I0731 23:56:31.205819    9020 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:56:31.215985    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:56:31.241087    9020 system_svc.go:56] duration metric: took 35.2672ms WaitForService to wait for kubelet
	I0731 23:56:31.241759    9020 kubeadm.go:582] duration metric: took 32.24498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:56:31.241759    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:56:31.394062    9020 request.go:629] Waited for 152.1922ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes
	I0731 23:56:31.394241    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:56:31.394241    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:31.394241    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:31.394241    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:31.398061    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:31.398061    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Audit-Id: 6beca473-58a1-411f-b3b4-5911b8ce6cb2
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:31.398061    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:31.398061    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:31.399068    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15498 chars]
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:105] duration metric: took 158.0935ms to run NodePressure ...
	I0731 23:56:31.399854    9020 start.go:241] waiting for startup goroutines ...
	I0731 23:56:31.399854    9020 start.go:246] waiting for cluster config update ...
	I0731 23:56:31.399854    9020 start.go:255] writing updated cluster config ...
	I0731 23:56:31.405016    9020 out.go:177] 
	I0731 23:56:31.408456    9020 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:56:31.417457    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:56:31.417817    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:56:31.424146    9020 out.go:177] * Starting "multinode-411400-m02" worker node in "multinode-411400" cluster
	I0731 23:56:31.426557    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:56:31.426557    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:56:31.427490    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:56:31.427490    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:56:31.428157    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:56:31.430754    9020 start.go:360] acquireMachinesLock for multinode-411400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:56:31.430920    9020 start.go:364] duration metric: took 77.8µs to acquireMachinesLock for "multinode-411400-m02"
	I0731 23:56:31.431066    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:56:31.431066    9020 fix.go:54] fixHost starting: m02
	I0731 23:56:31.431918    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:33.459032    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:56:33.459737    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:33.459737    9020 fix.go:112] recreateIfNeeded on multinode-411400-m02: state=Stopped err=<nil>
	W0731 23:56:33.459737    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:56:33.466096    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400-m02" ...
	I0731 23:56:33.468742    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400-m02
	I0731 23:56:36.455289    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:36.455289    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:36.455289    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:56:36.455538    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:38.679348    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:38.680031    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:38.680031    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:41.113511    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:41.113980    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:42.121759    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:44.317525    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:44.317525    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:44.317757    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:46.787395    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:46.787395    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:47.791046    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:49.968009    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:49.969035    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:49.969035    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:52.514205    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:52.514205    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:53.526893    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:55.692596    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:55.692596    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:55.692670    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:58.197663    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:58.197763    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:59.213925    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:01.450438    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:01.450438    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:01.450885    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:03.993253    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:03.993253    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:03.996232    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:06.166158    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:06.166158    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:06.166313    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:08.620940    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:08.622061    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:08.622252    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:57:08.625065    9020 machine.go:94] provisionDockerMachine start ...
	I0731 23:57:08.625065    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:10.780775    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:10.781016    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:10.781110    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:13.290807    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:13.290807    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:13.296391    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:13.297265    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:13.297265    9020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:57:13.430640    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:57:13.430640    9020 buildroot.go:166] provisioning hostname "multinode-411400-m02"
	I0731 23:57:13.430746    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:15.582351    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:15.582889    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:15.582889    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:18.111516    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:18.111516    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:18.118209    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:18.119099    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:18.119099    9020 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400-m02 && echo "multinode-411400-m02" | sudo tee /etc/hostname
	I0731 23:57:18.273593    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400-m02
	
	I0731 23:57:18.273753    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:22.938968    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:22.938968    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:22.946189    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:22.946890    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:22.946890    9020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:57:23.091608    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
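	The two SSH commands above are the standard hostname provisioning for a buildroot guest: the hostname is set to the machine name, and /etc/hosts gets a matching 127.0.1.1 entry so the node name resolves locally without DNS. A quick manual check inside the guest would be (a sketch, not part of the test run):
	    hostname                        # expect: multinode-411400-m02
	    grep '127.0.1.1' /etc/hosts     # expect: 127.0.1.1 multinode-411400-m02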
	I0731 23:57:23.091608    9020 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:57:23.092148    9020 buildroot.go:174] setting up certificates
	I0731 23:57:23.092188    9020 provision.go:84] configureAuth start
	I0731 23:57:23.092188    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:25.189438    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:25.189438    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:25.189646    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:27.671106    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:27.671106    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:27.672135    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:29.762607    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:29.763680    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:29.763680    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:32.228880    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:32.228880    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:32.228880    9020 provision.go:143] copyHostCerts
	I0731 23:57:32.229955    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:57:32.230419    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:57:32.230419    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:57:32.230419    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:57:32.232410    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:57:32.232573    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:57:32.232573    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:57:32.233121    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:57:32.234017    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:57:32.234017    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:57:32.234017    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:57:32.234836    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:57:32.235953    9020 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400-m02 san=[127.0.0.1 172.17.23.93 localhost minikube multinode-411400-m02]
	I0731 23:57:32.347842    9020 provision.go:177] copyRemoteCerts
	I0731 23:57:32.360165    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:57:32.360165    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:34.464608    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:34.464608    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:34.465609    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:36.937335    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:36.937428    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:36.937922    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:57:37.038513    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6782279s)
	I0731 23:57:37.038513    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:57:37.038820    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:57:37.088415    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:57:37.088415    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0731 23:57:37.131429    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:57:37.131429    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:57:37.176878    9020 provision.go:87] duration metric: took 14.0845097s to configureAuth
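	configureAuth regenerated the Docker server certificate with the node's current address in its SANs (the san=[...] list above) and copied ca.pem, server.pem and server-key.pem into /etc/docker, where the dockerd flags set later (--tlsverify, --tlscacert, --tlscert, --tlskey) expect them. A hypothetical spot-check inside the guest:
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'
	    # expect the list to include IP Address:172.17.23.93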
	I0731 23:57:37.176878    9020 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:57:37.177829    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:57:37.178102    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:39.240662    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:39.240662    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:39.240779    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:41.702584    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:41.702584    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:41.709210    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:41.709422    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:41.709422    9020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:57:41.833308    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:57:41.833308    9020 buildroot.go:70] root file system type: tmpfs
	I0731 23:57:41.833308    9020 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:57:41.833308    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:43.894372    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:43.895432    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:43.895432    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:46.351837    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:46.351837    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:46.357803    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:46.358289    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:46.358508    9020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.27.27"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:57:46.508005    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.27.27
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:57:46.508005    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:48.572007    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:48.572343    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:48.572343    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:51.050585    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:51.050585    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:51.057157    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:51.057358    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:51.057358    9020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:57:53.410347    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:57:53.410347    9020 machine.go:97] duration metric: took 44.7847085s to provisionDockerMachine
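	The "diff: can't stat" message above is expected on a freshly provisioned node: /lib/systemd/system/docker.service does not exist yet, so the diff fails and the || branch installs docker.service.new, reloads systemd, enables the unit (hence the "Created symlink" line) and restarts Docker; on a later pass an unchanged file makes the diff succeed and the restart is skipped. Once applied, the effective unit can be checked with (sketch):
	    sudo systemctl cat docker | grep -E '^(Environment=|ExecStart=/usr/bin/dockerd)'
	    # expect the NO_PROXY environment line and the dockerd command rendered above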
	I0731 23:57:53.410347    9020 start.go:293] postStartSetup for "multinode-411400-m02" (driver="hyperv")
	I0731 23:57:53.410347    9020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:57:53.423939    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:57:53.423939    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:57.993554    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:57.993554    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:57.993554    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:57:58.109764    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6856449s)
	I0731 23:57:58.122213    9020 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:57:58.128602    9020 command_runner.go:130] > NAME=Buildroot
	I0731 23:57:58.128843    9020 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:57:58.128843    9020 command_runner.go:130] > ID=buildroot
	I0731 23:57:58.128843    9020 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:57:58.128843    9020 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:57:58.128950    9020 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:57:58.128950    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:57:58.129443    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:57:58.130141    9020 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:57:58.130141    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:57:58.142020    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:57:58.161164    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:57:58.202992    9020 start.go:296] duration metric: took 4.792583s for postStartSetup
	I0731 23:57:58.203053    9020 fix.go:56] duration metric: took 1m26.7708778s for fixHost
	I0731 23:57:58.203053    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:00.249617    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:00.249617    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:00.250226    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:02.713803    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:02.713803    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:02.719658    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:58:02.720912    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:58:02.720912    9020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:58:02.851877    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722470282.869163899
	
	I0731 23:58:02.851877    9020 fix.go:216] guest clock: 1722470282.869163899
	I0731 23:58:02.851974    9020 fix.go:229] Guest: 2024-07-31 23:58:02.869163899 +0000 UTC Remote: 2024-07-31 23:57:58.2030531 +0000 UTC m=+250.164330401 (delta=4.666110799s)
	I0731 23:58:02.852107    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:04.955430    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:04.955430    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:04.955594    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:07.419751    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:07.420768    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:07.425610    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:58:07.426138    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:58:07.426280    9020 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722470282
	I0731 23:58:07.570849    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:58:02 UTC 2024
	
	I0731 23:58:07.570849    9020 fix.go:236] clock set: Wed Jul 31 23:58:02 UTC 2024
	 (err=<nil>)
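	This is minikube's guest-clock correction: the guest time is read with date +%s.%N, compared against the host (a delta of about 4.7s here, per the fix.go line above), and, when the skew is considered too large, the host's epoch seconds are pushed back into the guest. Done by hand the same correction is simply (sketch; epoch value taken from the log above):
	    date +%s.%N                  # guest clock, fractional seconds
	    sudo date -s @1722470282     # reset guest clock to the host's epoch seconds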
	I0731 23:58:07.570849    9020 start.go:83] releasing machines lock for "multinode-411400-m02", held for 1m36.1386996s
	I0731 23:58:07.571860    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:09.681890    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:09.681890    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:09.682751    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:12.140483    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:12.140483    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:12.144088    9020 out.go:177] * Found network options:
	I0731 23:58:12.159004    9020 out.go:177]   - NO_PROXY=172.17.27.27
	W0731 23:58:12.161911    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:58:12.164160    9020 out.go:177]   - NO_PROXY=172.17.27.27
	W0731 23:58:12.167340    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 23:58:12.168575    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:58:12.170205    9020 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:58:12.170205    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:12.180883    9020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:58:12.180883    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:14.328189    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:14.328189    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:14.328356    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:16.865874    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:16.865874    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:16.866760    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:16.895386    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:16.895598    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:16.896344    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:16.950662    9020 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:58:16.951331    9020 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.7810639s)
	W0731 23:58:16.951331    9020 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
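	The probe fails for a mundane reason: the command is built with the Windows binary name curl.exe but executed over SSH inside the Linux buildroot guest, where only curl exists, so it exits with status 127. The registry.k8s.io warnings printed shortly after are most likely a consequence of this failed probe rather than of a genuine network problem. Run inside the guest, the equivalent probe would be (sketch):
	    curl -sS -m 2 https://registry.k8s.io/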
	I0731 23:58:16.986443    9020 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0731 23:58:16.986545    9020 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8056006s)
	W0731 23:58:16.986545    9020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:58:17.000960    9020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:58:17.028958    9020 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:58:17.029735    9020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:58:17.029779    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:58:17.029992    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 23:58:17.061696    9020 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:58:17.061696    9020 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:58:17.067436    9020 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:58:17.080066    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 23:58:17.109687    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:58:17.130310    9020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:58:17.141350    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:58:17.171079    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:58:17.205305    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:58:17.238180    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:58:17.268589    9020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:58:17.299895    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:58:17.331112    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:58:17.363965    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:58:17.398504    9020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:58:17.414579    9020 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:58:17.426331    9020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:58:17.456537    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:17.646236    9020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 23:58:17.677662    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:58:17.692941    9020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:58:17.712765    9020 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:58:17.712765    9020 command_runner.go:130] > [Unit]
	I0731 23:58:17.712765    9020 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:58:17.712765    9020 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:58:17.712765    9020 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:58:17.712765    9020 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:58:17.712765    9020 command_runner.go:130] > StartLimitBurst=3
	I0731 23:58:17.712765    9020 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:58:17.713027    9020 command_runner.go:130] > [Service]
	I0731 23:58:17.713027    9020 command_runner.go:130] > Type=notify
	I0731 23:58:17.713027    9020 command_runner.go:130] > Restart=on-failure
	I0731 23:58:17.713027    9020 command_runner.go:130] > Environment=NO_PROXY=172.17.27.27
	I0731 23:58:17.713027    9020 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:58:17.713107    9020 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:58:17.713107    9020 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:58:17.713107    9020 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:58:17.713107    9020 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:58:17.713107    9020 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:58:17.713107    9020 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:58:17.713222    9020 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:58:17.713222    9020 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecStart=
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:58:17.713222    9020 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitCORE=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:58:17.713388    9020 command_runner.go:130] > TasksMax=infinity
	I0731 23:58:17.713388    9020 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:58:17.713388    9020 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:58:17.713388    9020 command_runner.go:130] > Delegate=yes
	I0731 23:58:17.713388    9020 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:58:17.713388    9020 command_runner.go:130] > KillMode=process
	I0731 23:58:17.713388    9020 command_runner.go:130] > [Install]
	I0731 23:58:17.713388    9020 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:58:17.724869    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:58:17.757721    9020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:58:17.802141    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:58:17.835425    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:58:17.873719    9020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:58:17.938462    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:58:17.962106    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:58:17.993778    9020 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
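	Note that /etc/crictl.yaml is written twice in this pass: first pointing at the containerd socket while containerd is being configured and then stopped, and here, with Docker selected as the runtime, rewritten to the cri-dockerd socket. A hypothetical check inside the guest once cri-docker.service is running:
	    cat /etc/crictl.yaml           # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	    sudo crictl version            # should report RuntimeName: docker, as seen further below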
	I0731 23:58:18.006466    9020 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:58:18.011447    9020 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:58:18.023583    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:58:18.044901    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:58:18.086509    9020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:58:18.276529    9020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:58:18.480819    9020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:58:18.480862    9020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:58:18.529857    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:18.710093    9020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:58:21.363247    9020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6530532s)
	I0731 23:58:21.374651    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:58:21.406334    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:58:21.438100    9020 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:58:21.640588    9020 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:58:21.833094    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:22.025929    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:58:22.073671    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:58:22.106667    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:22.304764    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:58:22.407388    9020 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:58:22.418912    9020 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:58:22.427532    9020 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:58:22.427532    9020 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:58:22.427626    9020 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0731 23:58:22.427626    9020 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:58:22.427626    9020 command_runner.go:130] > Access: 2024-07-31 23:58:22.352111940 +0000
	I0731 23:58:22.427626    9020 command_runner.go:130] > Modify: 2024-07-31 23:58:22.352111940 +0000
	I0731 23:58:22.427714    9020 command_runner.go:130] > Change: 2024-07-31 23:58:22.356112007 +0000
	I0731 23:58:22.427714    9020 command_runner.go:130] >  Birth: -
	I0731 23:58:22.427714    9020 start.go:563] Will wait 60s for crictl version
	I0731 23:58:22.439004    9020 ssh_runner.go:195] Run: which crictl
	I0731 23:58:22.449994    9020 command_runner.go:130] > /usr/bin/crictl
	I0731 23:58:22.462804    9020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:58:22.512438    9020 command_runner.go:130] > Version:  0.1.0
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeName:  docker
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:58:22.512865    9020 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:58:22.522517    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:58:22.559682    9020 command_runner.go:130] > 27.1.1
	I0731 23:58:22.567961    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:58:22.603762    9020 command_runner.go:130] > 27.1.1
	I0731 23:58:22.608104    9020 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:58:22.610970    9020 out.go:177]   - env NO_PROXY=172.17.27.27
	I0731 23:58:22.613514    9020 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:58:22.619552    9020 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:58:22.619552    9020 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:58:22.629115    9020 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:58:22.635550    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
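	This rewrite refreshes the host.minikube.internal alias in the guest's /etc/hosts: any stale entry is filtered out and the address found on the "vEthernet (Default Switch)" interface (172.17.16.1 here) is appended, giving the guest a stable name for the Windows host. Verifiable with (sketch):
	    grep 'host.minikube.internal' /etc/hosts    # expect: 172.17.16.1  host.minikube.internal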
	I0731 23:58:22.654821    9020 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:58:22.655049    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:22.655887    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:24.707903    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:24.708802    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:24.708802    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:24.709636    9020 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.23.93
	I0731 23:58:24.709712    9020 certs.go:194] generating shared ca certs ...
	I0731 23:58:24.709712    9020 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:58:24.710365    9020 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:58:24.710591    9020 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:58:24.710591    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:58:24.711295    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:58:24.711428    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:58:24.711661    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:58:24.711661    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:58:24.712416    9020 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:58:24.712573    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:58:24.712915    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:58:24.713027    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:58:24.713027    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:58:24.714200    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:58:24.714488    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:58:24.714607    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:24.714852    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:58:24.715203    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:58:24.767528    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:58:24.816225    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:58:24.857310    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:58:24.902498    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:58:24.945561    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:58:24.990258    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:58:25.044070    9020 ssh_runner.go:195] Run: openssl version
	I0731 23:58:25.052661    9020 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:58:25.064021    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:58:25.093249    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.100542    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.100542    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.112153    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.121004    9020 command_runner.go:130] > 3ec20f2e
	I0731 23:58:25.131199    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:58:25.160253    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:58:25.191392    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.198387    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.198764    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.209806    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.218749    9020 command_runner.go:130] > b5213941
	I0731 23:58:25.230640    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:58:25.257897    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:58:25.286898    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.294261    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.294261    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.307194    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.316519    9020 command_runner.go:130] > 51391683
	I0731 23:58:25.327521    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
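	The openssl x509 -hash / ln -fs pairs above reproduce what c_rehash would do: each certificate placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is how OpenSSL locates trust anchors during verification. For example (sketch; hash value taken from the log):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem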
	I0731 23:58:25.360208    9020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:58:25.366143    9020 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:58:25.366659    9020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:58:25.366659    9020 kubeadm.go:934] updating node {m02 172.17.23.93 8443 v1.30.3 docker false true} ...
	I0731 23:58:25.367188    9020 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.23.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:58:25.377903    9020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubeadm
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubectl
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubelet
	I0731 23:58:25.395824    9020 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:58:25.406854    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0731 23:58:25.421818    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0731 23:58:25.449816    9020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
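	The kubelet configuration rendered above is installed as the two files scp'd just before this point: /lib/systemd/system/kubelet.service and the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in; between them the node-specific flags (--hostname-override=multinode-411400-m02, --node-ip=172.17.23.93) end up in the effective ExecStart. After the daemon-reload that follows, the merged result can be inspected with (sketch):
	    sudo systemctl cat kubelet | grep -E 'hostname-override|node-ip'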
	I0731 23:58:25.491058    9020 ssh_runner.go:195] Run: grep 172.17.27.27	control-plane.minikube.internal$ /etc/hosts
	I0731 23:58:25.497367    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.27.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:58:25.528063    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:25.708148    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:58:25.740286    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:25.740431    9020 start.go:317] joinCluster: &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:58:25.741198    9020 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:25.741198    9020 host.go:66] Checking if "multinode-411400-m02" exists ...
	I0731 23:58:25.742019    9020 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:58:25.742568    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:25.743132    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:27.860780    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:27.860780    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:27.860780    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:27.861827    9020 api_server.go:166] Checking apiserver status ...
	I0731 23:58:27.873163    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:58:27.873163    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:29.998902    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:29.998992    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:29.999203    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:32.479095    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:58:32.479228    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:32.479684    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:58:32.579284    9020 command_runner.go:130] > 1911
	I0731 23:58:32.579595    9020 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.7063716s)
	I0731 23:58:32.590520    9020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup
	W0731 23:58:32.607768    9020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:58:32.618398    9020 ssh_runner.go:195] Run: ls
	I0731 23:58:32.626175    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:58:32.633426    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
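	The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the default minikube RBAC binding that leaves /healthz readable by unauthenticated clients (the IP and port are taken from the run above):

	  # query the apiserver health endpoint seen in the log; -k skips TLS verification
	  curl -k https://172.17.27.27:8443/healthz
	  # expected output, matching the log: ok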
	I0731 23:58:32.647962    9020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-411400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0731 23:58:32.808520    9020 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bgnqq, kube-system/kube-proxy-g7tpl
	I0731 23:58:35.845254    9020 command_runner.go:130] > node/multinode-411400-m02 cordoned
	I0731 23:58:35.846056    9020 command_runner.go:130] > pod "busybox-fc5497c4f-lxslb" has DeletionTimestamp older than 1 seconds, skipping
	I0731 23:58:35.846056    9020 command_runner.go:130] > node/multinode-411400-m02 drained
	I0731 23:58:35.846177    9020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-411400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.198053s)
	I0731 23:58:35.846277    9020 node.go:128] successfully drained node "multinode-411400-m02"
	I0731 23:58:35.846350    9020 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0731 23:58:35.846477    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:37.951299    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:37.951299    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:37.952315    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:40.384012    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:40.384012    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:40.384766    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:40.832967    9020 command_runner.go:130] ! W0731 23:58:40.856078    1635 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0731 23:58:41.345135    9020 command_runner.go:130] ! W0731 23:58:41.367177    1635 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8: output: E0731 23:58:41.075523    1673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-lxslb_default\" network: cni config uninitialized" podSandboxID="1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8"
	I0731 23:58:41.345259    9020 command_runner.go:130] ! time="2024-07-31T23:58:41Z" level=fatal msg="stopping the pod sandbox \"1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-lxslb_default\" network: cni config uninitialized"
	I0731 23:58:41.345259    9020 command_runner.go:130] ! : exit status 1
	I0731 23:58:41.374021    9020 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:58:41.374021    9020 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Stopping the kubelet service
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0731 23:58:41.374200    9020 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0731 23:58:41.374200    9020 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0731 23:58:41.374200    9020 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0731 23:58:41.374200    9020 command_runner.go:130] > to reset your system's IPVS tables.
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0731 23:58:41.374200    9020 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0731 23:58:41.374200    9020 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.5277798s)
	I0731 23:58:41.374200    9020 node.go:155] successfully reset node "multinode-411400-m02"
	I0731 23:58:41.375134    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:58:41.376152    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:58:41.377952    9020 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 23:58:41.378287    9020 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0731 23:58:41.378477    9020 round_trippers.go:463] DELETE https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:41.378525    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:41.378544    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:41.378544    9020 round_trippers.go:473]     Content-Type: application/json
	I0731 23:58:41.378544    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:41.394932    9020 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 23:58:41.394932    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:41.394932    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:41.394932    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Content-Length: 171
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:41 GMT
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Audit-Id: 5ff25fd2-5cf5-4721-8f46-4285b5eb7aec
	I0731 23:58:41.395408    9020 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-411400-m02","kind":"nodes","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36"}}
	I0731 23:58:41.395445    9020 node.go:180] successfully deleted node "multinode-411400-m02"
	I0731 23:58:41.395445    9020 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
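	The raw DELETE issued above against /api/v1/nodes/multinode-411400-m02 is the API equivalent of removing the Node object with kubectl; a minimal sketch, assuming kubectl is pointed at the same kubeconfig used in this run:

	  # the node was already drained and reset earlier in the log; this only deletes the Node object
	  kubectl --kubeconfig /var/lib/minikube/kubeconfig delete node multinode-411400-m02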
	I0731 23:58:41.395445    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 23:58:41.395551    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:43.439282    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:43.439472    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:43.439596    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:45.904957    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:58:45.904957    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:45.905920    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:58:46.078857    9020 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 23:58:46.079025    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6833519s)
	I0731 23:58:46.079051    9020 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:46.079129    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02"
	I0731 23:58:46.299769    9020 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:58:47.641371    9020 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:58:47.641535    9020 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0731 23:58:47.641573    9020 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0731 23:58:47.641573    9020 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:58:47.641573    9020 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:58:47.641658    9020 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:58:47.641658    9020 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:58:47.641705    9020 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.004055243s
	I0731 23:58:47.641705    9020 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0731 23:58:47.641705    9020 command_runner.go:130] > This node has joined the cluster:
	I0731 23:58:47.641705    9020 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0731 23:58:47.641705    9020 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0731 23:58:47.641705    9020 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0731 23:58:47.641808    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02": (1.5626592s)
	I0731 23:58:47.641808    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 23:58:47.856650    9020 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0731 23:58:48.049389    9020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-411400-m02 minikube.k8s.io/updated_at=2024_07_31T23_58_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=multinode-411400 minikube.k8s.io/primary=false
	I0731 23:58:48.161969    9020 command_runner.go:130] > node/multinode-411400-m02 labeled
	I0731 23:58:48.162026    9020 start.go:319] duration metric: took 22.4213074s to joinCluster
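	The rejoin just completed follows the usual two-step worker flow (mint a join command on the control plane, run it on the worker); a minimal sketch using the same flags that appear in the log, with the token and CA hash left as placeholders:

	  # on the control plane: print a join command with a non-expiring token
	  kubeadm token create --print-join-command --ttl=0
	  # on the worker: run the printed command with the extra flags minikube passed above
	  kubeadm join control-plane.minikube.internal:8443 --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> \
	    --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock \
	    --node-name=multinode-411400-m02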
	I0731 23:58:48.162260    9020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:48.162948    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:48.165160    9020 out.go:177] * Verifying Kubernetes components...
	I0731 23:58:48.180661    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:48.391789    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:58:48.416800    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:58:48.416800    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:58:48.417807    9020 node_ready.go:35] waiting up to 6m0s for node "multinode-411400-m02" to be "Ready" ...
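	The repeated GETs that follow are a readiness poll against the Node object; a minimal sketch of an equivalent one-liner, assuming kubectl access to the same cluster:

	  # block until the node reports Ready, with the same 6-minute ceiling the test uses
	  kubectl wait --for=condition=Ready node/multinode-411400-m02 --timeout=6m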
	I0731 23:58:48.417807    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:48.417807    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:48.417807    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:48.417807    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:48.422134    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:48.422134    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:48 GMT
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Audit-Id: 97b208d4-c7b7-47ff-86a1-3469207dd103
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:48.422134    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:48.422134    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:48.422392    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:48.924542    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:48.924619    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:48.924619    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:48.924619    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:48.927375    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:48.928196    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:48.928196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:48.928196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:48 GMT
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Audit-Id: d44b82a7-8479-44ba-af77-211fdbbd4026
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:48.928455    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:49.423488    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:49.423488    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:49.423488    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:49.423488    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:49.428267    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:49.428267    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:49.428267    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:49.428267    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:49 GMT
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Audit-Id: cef3f9d6-7fda-4634-8599-9dc663ce3cd5
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:49.430063    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:49.923430    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:49.923785    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:49.923785    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:49.923785    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:49.927252    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:49.927828    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:49.927828    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:49.927828    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:49 GMT
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Audit-Id: e2e1cb25-b798-4026-9fb7-3e9f99e822e4
	I0731 23:58:49.928058    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:50.429609    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:50.429872    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:50.429872    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:50.429872    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:50.433192    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:50.433192    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Audit-Id: 8a8ed614-6180-46c4-89de-5d9f960e5507
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:50.433192    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:50.433192    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:50 GMT
	I0731 23:58:50.434190    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:50.434190    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:50.932365    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:50.932631    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:50.932631    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:50.932631    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:50.935309    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:50.935309    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:50.935309    9020 round_trippers.go:580]     Audit-Id: 779f9a9d-07ae-45ec-b9f7-85c3dfdb88bb
	I0731 23:58:50.935309    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:50.935535    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:50.935535    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:50.935535    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:50.935535    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:50 GMT
	I0731 23:58:50.935762    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:51.423910    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:51.424223    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:51.424223    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:51.424223    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:51.426610    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:51.426610    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:51.427544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:51.427544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:51.427544    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:51 GMT
	I0731 23:58:51.427544    9020 round_trippers.go:580]     Audit-Id: aa9edc84-854f-4071-ba9b-ecb8913e7019
	I0731 23:58:51.427684    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:51.427684    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:51.427855    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:51.925409    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:51.925503    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:51.925503    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:51.925571    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:51.928105    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:51.928205    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:51.928205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:51.928205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:51 GMT
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Audit-Id: b6447ea5-71d4-460e-a224-ebb4d20165eb
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:51.928509    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:52.425890    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:52.425890    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:52.426018    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:52.426018    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:52.430345    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:52.431055    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:52.431055    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:52.431055    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:52 GMT
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Audit-Id: 7b8194fe-551b-442d-92a4-63f0c8358bd6
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:52.431194    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:52.926821    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:52.926821    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:52.926899    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:52.926899    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:52.930559    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:52.931246    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:52.931246    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:52 GMT
	I0731 23:58:52.931246    9020 round_trippers.go:580]     Audit-Id: 2d1655bc-ce0e-49c9-b96b-81dde0b3b107
	I0731 23:58:52.931309    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:52.931309    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:52.931309    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:52.931309    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:52.931866    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:52.932470    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:53.428845    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:53.428845    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:53.428845    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:53.428845    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:53.431739    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:53.432450    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:53.432450    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:53.432450    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:53 GMT
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Audit-Id: 880d6467-9cc4-477b-868e-978da0ea6038
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:53.432642    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:53.928096    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:53.928096    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:53.928179    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:53.928179    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:53.934268    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:58:53.934416    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Audit-Id: f446bb52-6a2b-439b-84b0-f68d0551617b
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:53.934416    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:53.934416    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:53 GMT
	I0731 23:58:53.934574    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.427915    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:54.427915    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:54.427915    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:54.427915    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:54.430044    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:54.431037    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:54.431060    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:54.431060    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:54 GMT
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Audit-Id: 54b3e86f-7651-4edf-ade0-184cc5893ed3
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:54.431305    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.927678    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:54.927678    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:54.927678    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:54.927794    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:54.931237    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:54.931721    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:54.931721    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:54 GMT
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Audit-Id: 184110fe-97e7-429e-a2a9-771fc31746af
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:54.931799    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:54.931799    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:54.932482    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.933025    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:55.428377    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:55.428441    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:55.428441    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:55.428441    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:55.432426    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:55.432426    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:55.432426    9020 round_trippers.go:580]     Audit-Id: 01a1c1d3-bab3-489a-a97b-fcdb255bf3fd
	I0731 23:58:55.432426    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:55.432490    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:55.432490    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:55.432490    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:55.432529    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:55 GMT
	I0731 23:58:55.432854    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:55.926083    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:55.926083    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:55.926217    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:55.926217    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:55.929610    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:55.930072    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Audit-Id: 6f6429f7-b703-4ba2-b1d5-7b679c74f6ee
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:55.930072    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:55.930072    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:55.930133    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:55 GMT
	I0731 23:58:55.930206    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:56.423647    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:56.423647    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:56.423647    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:56.423647    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:56.426377    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:56.427092    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Audit-Id: acd37543-58f2-4106-a5b6-ebd7057efddc
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:56.427092    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:56.427092    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:56 GMT
	I0731 23:58:56.427499    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:56.921075    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:56.921075    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:56.921075    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:56.921075    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:56.924780    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:56.924928    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:56.924928    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:56 GMT
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Audit-Id: c736b7e7-4aa7-4b9c-9485-7700e79e1e1c
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:56.924928    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:56.925152    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:57.420202    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:57.420263    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:57.420263    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:57.420263    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:57.424689    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:57.424804    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:57 GMT
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Audit-Id: 21d6d737-db6b-4f65-a4b3-aaae3ed97da7
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:57.424804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:57.424804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:57.424982    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:57.425530    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:57.920338    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:57.920527    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:57.920527    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:57.920527    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:57.927177    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:58:57.927177    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Audit-Id: 16b9e477-8a8a-46ac-b6cf-a172c4810466
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:57.927177    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:57.927177    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:57 GMT
	I0731 23:58:57.927878    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:58.431900    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:58.431968    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:58.431968    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:58.431968    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:58.435707    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:58.435707    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:58.435707    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:58.435707    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:58 GMT
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Audit-Id: e3655bec-6573-425f-96cc-7e96d43aad47
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:58.435707    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:58.930450    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:58.930746    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:58.930746    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:58.930746    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:58.934275    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:58.934275    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:58.934275    9020 round_trippers.go:580]     Audit-Id: b1804a51-172d-4bd4-ba5d-1617d1498987
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:58.934428    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:58.934428    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:58 GMT
	I0731 23:58:58.934536    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:59.432686    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:59.432686    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:59.432686    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:59.432686    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:59.435453    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:59.436019    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:59.436019    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:59 GMT
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Audit-Id: aff48edb-6e80-4ee7-96da-455e6b656597
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:59.436019    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:59.436190    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:59.436629    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:59.932258    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:59.932258    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:59.932258    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:59.932258    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:59.936754    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:59.936754    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:59 GMT
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Audit-Id: 5a1d371d-d437-431f-8dcc-6d4120e0f415
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:59.936754    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:59.936836    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:59.936836    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:00.418551    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:00.418825    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:00.418825    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:00.418825    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:00.425768    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:00.425768    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Audit-Id: a946f78e-fe6f-4a36-ae33-a5c344754f81
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:00.425768    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:00.425768    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:00 GMT
	I0731 23:59:00.427751    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:00.919094    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:00.919094    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:00.919094    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:00.919094    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:00.923522    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:00.923611    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:00.923611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:00.923611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:00 GMT
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Audit-Id: cd248c2e-9b7b-4425-924c-618fca420017
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:00.923611    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.432428    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:01.432428    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:01.432578    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:01.432578    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:01.435595    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:01.435595    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Audit-Id: a06e4862-bff1-4d00-bc65-f171fc4c8696
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:01.435595    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:01.435595    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:01 GMT
	I0731 23:59:01.435781    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.919993    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:01.919993    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:01.919993    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:01.919993    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:01.926043    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:01.926043    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:01.926043    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:01 GMT
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Audit-Id: 4c748ced-92bd-41a8-93fb-cf46866743d3
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:01.926043    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:01.926576    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.926688    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:02.433682    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:02.434091    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:02.434091    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:02.434091    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:02.437136    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:02.437205    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:02.437205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:02 GMT
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Audit-Id: 0d5a2be0-7070-4180-856e-4fa4ab058c29
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:02.437275    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:02.437641    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:02.931720    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:02.931822    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:02.931822    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:02.931822    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:02.935387    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:02.936050    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:02.936050    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:02.936050    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:02 GMT
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Audit-Id: 6e3fd155-1389-44cf-90bd-b127ece7fd87
	I0731 23:59:02.936366    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.427963    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:03.427963    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:03.427963    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:03.427963    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:03.432896    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:03.432896    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:03.432896    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:03.432961    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:03.432961    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:03.432961    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:03 GMT
	I0731 23:59:03.433001    9020 round_trippers.go:580]     Audit-Id: def46f35-b318-4de9-b58c-3ef3a7737823
	I0731 23:59:03.433030    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:03.433030    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.927524    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:03.927718    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:03.927718    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:03.927718    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:03.934462    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:03.934511    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:03 GMT
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Audit-Id: f456134f-de0a-46ed-9c21-672a03ab69b5
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:03.934511    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:03.934511    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:03.934680    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.935223    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:04.426519    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:04.426681    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:04.426681    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:04.426681    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:04.430862    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:04.430862    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:04.430862    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:04.430862    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:04 GMT
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Audit-Id: 4c51f8e9-6165-46b7-9f71-1e11f400056e
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:04.430862    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:04.928153    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:04.928434    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:04.928434    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:04.928434    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:04.931993    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:04.932374    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:04.932374    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:04.932374    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:04.932374    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:04 GMT
	I0731 23:59:04.932374    9020 round_trippers.go:580]     Audit-Id: da76c4af-fcbe-4462-ad2b-44673786c161
	I0731 23:59:04.932450    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:04.932450    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:04.932523    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:05.427915    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:05.428110    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:05.428110    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:05.428110    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:05.430907    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:05.431121    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:05.431121    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:05 GMT
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Audit-Id: 8905507e-7335-4db7-b7c2-6fbba6373057
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:05.431121    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:05.431298    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:05.926984    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:05.927252    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:05.927252    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:05.927252    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:05.931535    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:05.931535    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:05 GMT
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Audit-Id: d22ff78f-cb31-48c6-ab8c-ccd7fe858dde
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:05.931782    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:05.931782    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:05.931782    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:05.931911    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:06.426988    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:06.427104    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:06.427104    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:06.427104    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:06.430690    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:06.430743    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Audit-Id: f1ed9130-f7d8-448a-9d6a-d9f9232759b6
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:06.430743    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:06.430743    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:06.430815    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:06 GMT
	I0731 23:59:06.430946    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:06.431463    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:06.925188    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:06.925417    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:06.925417    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:06.925417    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:06.929269    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:06.929269    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:06.929660    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:06.929660    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:06 GMT
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Audit-Id: df649e0a-9af6-43d5-9695-1ed43c790dc7
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:06.929833    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:07.424387    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:07.424387    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.424387    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.424387    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.426963    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.427947    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Audit-Id: 1eff860c-590f-4c88-8453-faf246377be3
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.427947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.427947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.428156    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:07.930096    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:07.930326    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.930326    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.930326    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.933880    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.934728    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Audit-Id: 96e26882-a6cf-4339-aa16-16a6f8555e5e
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.934728    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.934728    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.934888    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2118","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0731 23:59:07.935418    9020 node_ready.go:49] node "multinode-411400-m02" has status "Ready":"True"
	I0731 23:59:07.935418    9020 node_ready.go:38] duration metric: took 19.5173608s for node "multinode-411400-m02" to be "Ready" ...
	I0731 23:59:07.935610    9020 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:59:07.935610    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:59:07.935769    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.935769    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.935769    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.940101    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:07.941023    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Audit-Id: 380fc529-a5ea-4bd4-8856-3567eb717c53
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.941023    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.941023    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.944222    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2121"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86034 chars]
	I0731 23:59:07.950566    9020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.950566    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:59:07.950566    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.950566    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.950566    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.955800    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:59:07.955800    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.955800    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Audit-Id: 1ba7ee57-d553-4326-b4a8-7201cce5361b
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.955800    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.955800    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0731 23:59:07.956531    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.956531    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.956531    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.956531    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.958753    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.958753    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Audit-Id: c6347ecd-45fb-495b-8c24-543440c0266d
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.958753    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.958753    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.958753    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.959739    9020 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.959739    9020 pod_ready.go:81] duration metric: took 9.1737ms for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.959739    9020 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.959739    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:59:07.959739    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.959739    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.959739    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.962964    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.962964    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Audit-Id: 7faf5493-8b3a-4ae2-982f-16640fd8542f
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.962964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.962964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.962964    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1862","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0731 23:59:07.964309    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.964309    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.964388    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.964388    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.972811    9020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:59:07.972811    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Audit-Id: f73cb914-aade-47d6-873c-ff4832873e2e
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.972869    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.972869    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.973065    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.973698    9020 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.973698    9020 pod_ready.go:81] duration metric: took 13.9586ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.973895    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.973983    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:59:07.974014    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.974014    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.974014    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.976300    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.976300    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.976300    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.976300    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Audit-Id: d44591f8-7847-48c4-83d6-193ef4b32f70
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.976300    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1864","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0731 23:59:07.976300    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.976300    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.976300    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.976300    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.979963    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.979963    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.979963    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Audit-Id: 721d0be3-2be9-484d-927d-216334075dd3
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.980303    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.980616    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.980715    9020 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.980715    9020 pod_ready.go:81] duration metric: took 6.8194ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.980715    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.980715    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:59:07.980715    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.980715    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.980715    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.985379    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:07.985379    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Audit-Id: 3423b9b4-fd61-40bb-915c-1b1125103937
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.985379    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.986236    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.986236    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:07.986536    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1891","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0731 23:59:07.987227    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.987227    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.987287    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.987287    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.007380    9020 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0731 23:59:08.007779    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Audit-Id: 374b06b6-a342-4797-a7fa-37af41ee3c23
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.007779    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.007779    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.007980    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:08.008162    9020 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:08.008162    9020 pod_ready.go:81] duration metric: took 27.4466ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.008162    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.133531    9020 request.go:629] Waited for 125.2461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:59:08.133650    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:59:08.133650    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.133650    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.133650    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.137619    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.137619    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.137619    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.137619    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.137726    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.137726    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.137726    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.137726    9020 round_trippers.go:580]     Audit-Id: e7fd4fd0-0730-48fc-a1f8-04edeead89d5
	I0731 23:59:08.138121    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:59:08.337338    9020 request.go:629] Waited for 198.5768ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:59:08.337338    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:59:08.337540    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.337540    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.337540    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.340849    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.341493    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Audit-Id: 512b6bd2-2ed6-4988-9dc0-a25e694716d8
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.341493    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.341493    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.341668    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1888","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:59:08.341668    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:59:08.341668    9020 pod_ready.go:81] duration metric: took 333.5016ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:59:08.342213    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:59:08.342213    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.540222    9020 request.go:629] Waited for 197.7939ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:59:08.540305    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:59:08.540305    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.540305    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.540305    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.543355    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.543404    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.543404    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.543404    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Audit-Id: 0ad88411-b846-49dd-8c2a-452d8ab619d4
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.544272    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:59:08.744654    9020 request.go:629] Waited for 199.2863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:08.744948    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:08.744948    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.744948    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.744948    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.749260    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.749260    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Audit-Id: 795928f9-fb69-4d0a-9333-05a3fa77941f
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.749260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.749260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.749630    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:08.750281    9020 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:08.750307    9020 pod_ready.go:81] duration metric: took 408.0888ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.750307    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.931506    9020 request.go:629] Waited for 180.9231ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:59:08.931601    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:59:08.931601    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.931601    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.931601    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.937147    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:59:08.937147    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.937403    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.937403    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Audit-Id: 5a0fb46a-0fe5-4bbc-ad09-b79b9c41c9aa
	I0731 23:59:08.937519    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"2087","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0731 23:59:09.132453    9020 request.go:629] Waited for 194.3664ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:09.132962    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:09.132962    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.132962    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.132962    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.139757    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:09.139757    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Audit-Id: 0a2fd36f-2010-4b1f-8284-9276a7950b50
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.139757    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.139757    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.140431    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2118","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0731 23:59:09.140431    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:09.140971    9020 pod_ready.go:81] duration metric: took 390.6589ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.140971    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.336915    9020 request.go:629] Waited for 195.7299ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:59:09.337145    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:59:09.337145    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.337145    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.337145    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.344839    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:59:09.344839    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.344839    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.344839    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Audit-Id: 44361e7d-f7c3-4627-bbb5-e841f5f15b8a
	I0731 23:59:09.345583    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1875","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0731 23:59:09.539294    9020 request.go:629] Waited for 193.624ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:09.539881    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:09.539881    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.539881    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.539881    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.543256    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:09.543256    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.543256    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.543256    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Audit-Id: d4a86f30-a175-4944-9211-4d89d5ff57e1
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.543782    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:09.544395    9020 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:09.544395    9020 pod_ready.go:81] duration metric: took 403.4189ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.544395    9020 pod_ready.go:38] duration metric: took 1.6087646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:59:09.544395    9020 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:59:09.558362    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:59:09.581786    9020 system_svc.go:56] duration metric: took 37.1708ms WaitForService to wait for kubelet
	I0731 23:59:09.581821    9020 kubeadm.go:582] duration metric: took 21.4192497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:59:09.581821    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:59:09.740424    9020 request.go:629] Waited for 158.4734ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes
	I0731 23:59:09.740563    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:59:09.740692    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.740692    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.740692    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.743943    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:09.743943    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.743943    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Audit-Id: 9fe273cd-df84-45f7-a61c-a392f9661ba8
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.743943    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.745935    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2124"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15604 chars]
	I0731 23:59:09.746890    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:105] duration metric: took 165.1393ms to run NodePressure ...
	I0731 23:59:09.746963    9020 start.go:241] waiting for startup goroutines ...
	I0731 23:59:09.747109    9020 start.go:255] writing updated cluster config ...
	I0731 23:59:09.751636    9020 out.go:177] 
	I0731 23:59:09.754893    9020 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:59:09.768181    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:59:09.768414    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:59:09.774883    9020 out.go:177] * Starting "multinode-411400-m03" worker node in "multinode-411400" cluster
	I0731 23:59:09.777435    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:59:09.777435    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:59:09.777435    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:59:09.777435    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:59:09.778563    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:59:09.785779    9020 start.go:360] acquireMachinesLock for multinode-411400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:59:09.786203    9020 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-411400-m03"
	I0731 23:59:09.786468    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:59:09.786468    9020 fix.go:54] fixHost starting: m03
	I0731 23:59:09.786767    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:11.892899    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:59:11.892899    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:11.892899    9020 fix.go:112] recreateIfNeeded on multinode-411400-m03: state=Stopped err=<nil>
	W0731 23:59:11.892899    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:59:11.898823    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400-m03" ...
	I0731 23:59:11.902632    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400-m03
	I0731 23:59:14.947331    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:59:14.947331    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:14.947677    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:59:14.947677    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:17.260444    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:59:17.260444    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:17.261240    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 23:59:19.714817    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:59:19.715421    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:20.728042    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:22.905684    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:59:22.905684    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:22.906353    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-411400" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-411400
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-411400: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-411400" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-411400	172.17.20.56
multinode-411400-m02	172.17.28.42
multinode-411400-m03	172.17.16.77

After restart: 
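The /stderr trace above shows where the restart stalled: Hyper-V reports multinode-411400-m03 as Running, but `(( Hyper-V\Get-VM multinode-411400-m03 ).networkadapters[0]).ipaddresses[0]` keeps returning an empty string, so the start command keeps re-polling the VM state and IP until the test deadline expires, and the node list afterwards comes back empty. A rough Go sketch of that wait loop (the PowerShell expression is the one from the log; the bare `powershell.exe` lookup, one-second retry and attempt cap are assumptions for illustration, not minikube's libmachine code):

	// sketch: poll Hyper-V for the first IP of the VM's first adapter,
	// retrying while the answer is empty, as the log above does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func vmIP(name string) (string, error) {
		ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		for i := 0; i < 120; i++ {
			ip, err := vmIP("multinode-411400-m03")
			if err == nil && ip != "" {
				fmt.Println("host is up at", ip)
				return
			}
			// Empty output: the guest is booting and has not reported an
			// address yet, so wait and ask again.
			time.Sleep(1 * time.Second)
		}
		fmt.Println("timed out waiting for an IP address")
	}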
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-411400 -n multinode-411400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-411400 -n multinode-411400: (12.2306769s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 logs -n 25: (8.7584804s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-411400 cp testdata\cp-test.txt                                                                                 | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:44 UTC | 31 Jul 24 23:44 UTC |
	|         | multinode-411400-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:44 UTC | 31 Jul 24 23:44 UTC |
	|         | multinode-411400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:44 UTC | 31 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:44 UTC | 31 Jul 24 23:44 UTC |
	|         | multinode-411400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:44 UTC | 31 Jul 24 23:45 UTC |
	|         | multinode-411400:/home/docker/cp-test_multinode-411400-m02_multinode-411400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:45 UTC |
	|         | multinode-411400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n multinode-411400 sudo cat                                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_multinode-411400-m02_multinode-411400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:45 UTC |
	|         | multinode-411400-m03:/home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:45 UTC |
	|         | multinode-411400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n multinode-411400-m03 sudo cat                                                                    | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp testdata\cp-test.txt                                                                                 | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:45 UTC | 31 Jul 24 23:46 UTC |
	|         | multinode-411400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:46 UTC |
	|         | multinode-411400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:46 UTC |
	|         | multinode-411400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:46 UTC |
	|         | multinode-411400:/home/docker/cp-test_multinode-411400-m03_multinode-411400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:46 UTC |
	|         | multinode-411400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n multinode-411400 sudo cat                                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:46 UTC | 31 Jul 24 23:47 UTC |
	|         | /home/docker/cp-test_multinode-411400-m03_multinode-411400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt                                                        | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:47 UTC | 31 Jul 24 23:47 UTC |
	|         | multinode-411400-m02:/home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n                                                                                                  | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:47 UTC | 31 Jul 24 23:47 UTC |
	|         | multinode-411400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-411400 ssh -n multinode-411400-m02 sudo cat                                                                    | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:47 UTC | 31 Jul 24 23:47 UTC |
	|         | /home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-411400 node stop m03                                                                                           | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:47 UTC | 31 Jul 24 23:48 UTC |
	| node    | multinode-411400 node start                                                                                              | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:48 UTC | 31 Jul 24 23:51 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-411400                                                                                                 | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:52 UTC |                     |
	| stop    | -p multinode-411400                                                                                                      | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:52 UTC | 31 Jul 24 23:53 UTC |
	| start   | -p multinode-411400                                                                                                      | multinode-411400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 23:53 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:53:48
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:53:48.303046    9020 out.go:291] Setting OutFile to fd 1580 ...
	I0731 23:53:48.304668    9020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:53:48.304668    9020 out.go:304] Setting ErrFile to fd 1560...
	I0731 23:53:48.304668    9020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:53:48.333709    9020 out.go:298] Setting JSON to false
	I0731 23:53:48.337778    9020 start.go:129] hostinfo: {"hostname":"minikube6","uptime":545969,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 23:53:48.337778    9020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 23:53:48.432273    9020 out.go:177] * [multinode-411400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 23:53:48.505561    9020 notify.go:220] Checking for updates...
	I0731 23:53:48.580031    9020 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:53:48.718155    9020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:53:48.761871    9020 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 23:53:48.855845    9020 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:53:49.014279    9020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:53:49.024644    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:53:49.024644    9020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:53:54.504269    9020 out.go:177] * Using the hyperv driver based on existing profile
	I0731 23:53:54.560422    9020 start.go:297] selected driver: hyperv
	I0731 23:53:54.561486    9020 start.go:901] validating driver "hyperv" against &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:53:54.561831    9020 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:53:54.615188    9020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:53:54.615188    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:53:54.615188    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:53:54.615707    9020 start.go:340] cluster config:
	{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.20.56 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:fa
lse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:53:54.615820    9020 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:53:54.747039    9020 out.go:177] * Starting "multinode-411400" primary control-plane node in "multinode-411400" cluster
	I0731 23:53:54.805142    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:53:54.805833    9020 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 23:53:54.806321    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:53:54.806474    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:53:54.807145    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:53:54.807221    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:53:54.810284    9020 start.go:360] acquireMachinesLock for multinode-411400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:53:54.810387    9020 start.go:364] duration metric: took 102.2µs to acquireMachinesLock for "multinode-411400"
	I0731 23:53:54.810685    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:53:54.810851    9020 fix.go:54] fixHost starting: 
	I0731 23:53:54.812038    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:53:57.486356    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:53:57.486356    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:53:57.486356    9020 fix.go:112] recreateIfNeeded on multinode-411400: state=Stopped err=<nil>
	W0731 23:53:57.486356    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:53:57.491929    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400" ...
	I0731 23:53:57.495953    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400
	I0731 23:54:00.411013    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:00.411616    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:00.411616    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:54:00.411616    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:02.547435    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:02.547822    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:02.547943    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:04.966261    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:04.966261    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:05.973127    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:08.153089    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:08.153925    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:08.154053    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:10.585269    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:10.585269    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:11.600141    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:13.678015    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:13.678086    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:13.678086    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:16.097463    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:16.097463    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:17.108434    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:19.228962    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:19.229129    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:19.229129    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:21.666842    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:54:21.667179    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:22.678852    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:24.819658    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:24.820787    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:24.820787    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:27.245507    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:27.245616    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:27.248491    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:29.306823    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:29.306823    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:29.307614    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:31.698157    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:31.698939    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:31.698939    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:54:31.701792    9020 machine.go:94] provisionDockerMachine start ...
	I0731 23:54:31.701792    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:33.681441    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:33.681441    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:33.682528    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:36.061380    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:36.061380    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:36.066983    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:36.067662    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:36.067662    9020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:54:36.194168    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:54:36.194168    9020 buildroot.go:166] provisioning hostname "multinode-411400"
	I0731 23:54:36.194168    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:38.196247    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:38.196247    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:38.196808    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:40.590494    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:40.591287    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:40.596466    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:40.597009    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:40.597233    9020 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400 && echo "multinode-411400" | sudo tee /etc/hostname
	I0731 23:54:40.738821    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400
	
	I0731 23:54:40.738917    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:42.751663    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:42.752048    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:42.752048    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:45.123560    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:45.124329    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:45.130270    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:54:45.130811    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:54:45.130811    9020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:54:45.266946    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:54:45.267008    9020 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:54:45.267065    9020 buildroot.go:174] setting up certificates
	I0731 23:54:45.267065    9020 provision.go:84] configureAuth start
	I0731 23:54:45.267149    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:47.285647    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:47.285647    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:47.286422    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:49.741804    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:49.742765    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:49.742765    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:51.757854    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:51.758198    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:51.758198    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:54.169048    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:54.169048    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:54.169048    9020 provision.go:143] copyHostCerts
	I0731 23:54:54.169048    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:54:54.169838    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:54:54.169931    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:54:54.170209    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:54:54.172171    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:54:54.172325    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:54:54.172325    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:54:54.172961    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:54:54.174445    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:54:54.174445    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:54:54.174445    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:54:54.175276    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:54:54.176022    9020 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400 san=[127.0.0.1 172.17.27.27 localhost minikube multinode-411400]
	I0731 23:54:54.288634    9020 provision.go:177] copyRemoteCerts
	I0731 23:54:54.298591    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:54:54.298591    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:54:56.295797    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:54:56.296599    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:56.296685    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:54:58.692627    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:54:58.692627    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:54:58.692871    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:54:58.793672    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4950244s)
	I0731 23:54:58.793832    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:54:58.794375    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:54:58.843786    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:54:58.844781    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 23:54:58.886908    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:54:58.886908    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:54:58.937725    9020 provision.go:87] duration metric: took 13.670417s to configureAuth
	I0731 23:54:58.937790    9020 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:54:58.938698    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:54:58.938867    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:00.979002    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:00.979246    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:00.979314    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:03.404415    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:03.404603    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:03.409176    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:03.410037    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:03.410037    9020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:55:03.533840    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:55:03.533955    9020 buildroot.go:70] root file system type: tmpfs
	I0731 23:55:03.534148    9020 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:55:03.534224    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:05.575663    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:05.575663    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:05.576328    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:07.999632    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:07.999696    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:08.004983    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:08.005048    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:08.005589    9020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:55:08.174742    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:55:08.174924    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:10.215243    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:10.216337    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:10.216442    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:12.699027    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:12.699027    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:12.705182    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:12.705364    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:12.705902    9020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:55:15.163334    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:55:15.163334    9020 machine.go:97] duration metric: took 43.4609895s to provisionDockerMachine
	I0731 23:55:15.164008    9020 start.go:293] postStartSetup for "multinode-411400" (driver="hyperv")
	I0731 23:55:15.164008    9020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:55:15.175663    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:55:15.175663    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:17.215217    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:17.215722    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:17.216004    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:19.651516    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:19.652075    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:19.652528    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:19.756942    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5812209s)
	I0731 23:55:19.769764    9020 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:55:19.776813    9020 command_runner.go:130] > NAME=Buildroot
	I0731 23:55:19.776977    9020 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:55:19.776977    9020 command_runner.go:130] > ID=buildroot
	I0731 23:55:19.776977    9020 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:55:19.776977    9020 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:55:19.777089    9020 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:55:19.777089    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:55:19.777626    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:55:19.778757    9020 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:55:19.778833    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:55:19.789315    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:55:19.806910    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:55:19.851172    9020 start.go:296] duration metric: took 4.6871052s for postStartSetup
	I0731 23:55:19.851301    9020 fix.go:56] duration metric: took 1m25.0393722s for fixHost
	I0731 23:55:19.851386    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:21.943769    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:21.943769    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:21.944333    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:24.386522    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:24.386730    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:24.391916    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:24.392597    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:24.392597    9020 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:55:24.507809    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722470124.528439864
	
	I0731 23:55:24.507809    9020 fix.go:216] guest clock: 1722470124.528439864
	I0731 23:55:24.507809    9020 fix.go:229] Guest: 2024-07-31 23:55:24.528439864 +0000 UTC Remote: 2024-07-31 23:55:19.8513011 +0000 UTC m=+91.814596601 (delta=4.677138764s)
	I0731 23:55:24.507809    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:26.612699    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:26.612699    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:26.613451    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:29.092016    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:29.092290    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:29.098597    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:55:29.099410    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.27.27 22 <nil> <nil>}
	I0731 23:55:29.099410    9020 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722470124
	I0731 23:55:29.240977    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:55:24 UTC 2024
	
	I0731 23:55:29.240977    9020 fix.go:236] clock set: Wed Jul 31 23:55:24 UTC 2024
	 (err=<nil>)
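	The sequence above reads the guest clock over SSH, compares it with the locally recorded timestamp (delta here was about 4.7s), and resets the guest with date -s. A rough manual equivalent of the same kind of check, assuming SSH access as the "docker" user (key path and threshold illustrative):
	  host_epoch=$(date +%s)
	  guest_epoch=$(ssh -i id_rsa docker@172.17.27.27 'date +%s')
	  delta=$(( host_epoch - guest_epoch ))
	  # reset the guest clock if the absolute skew exceeds a couple of seconds
	  [ "${delta#-}" -gt 2 ] && ssh -i id_rsa docker@172.17.27.27 "sudo date -s @${host_epoch}"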
	I0731 23:55:29.240977    9020 start.go:83] releasing machines lock for "multinode-411400", held for 1m34.4293944s
	I0731 23:55:29.240977    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:31.372781    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:31.373906    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:31.373906    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:33.861029    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:33.862019    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:33.866263    9020 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:55:33.866359    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:33.876325    9020 ssh_runner.go:195] Run: cat /version.json
	I0731 23:55:33.876325    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:55:35.994656    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:35.994656    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:35.994815    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:55:38.571494    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:38.572480    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:38.572480    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:38.596355    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:55:38.596407    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:55:38.596871    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:55:38.659327    9020 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:55:38.659327    9020 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.7930034s)
	W0731 23:55:38.659327    9020 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 23:55:38.691780    9020 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 23:55:38.691780    9020 ssh_runner.go:235] Completed: cat /version.json: (4.8153931s)
	I0731 23:55:38.703501    9020 ssh_runner.go:195] Run: systemctl --version
	I0731 23:55:38.712127    9020 command_runner.go:130] > systemd 252 (252)
	I0731 23:55:38.712427    9020 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 23:55:38.726256    9020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:55:38.734795    9020 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 23:55:38.735416    9020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:55:38.748043    9020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:55:38.776538    9020 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:55:38.776802    9020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
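	Renaming the stock podman/bridge config to *.mk_disabled sidelines it without deleting it, so the CNI minikube deploys later (kindnet, per the multinode detection further down) can take over; undoing the change is just the reverse mv. A quick look at what is active versus parked:
	  ls -l /etc/cni/net.d/
	  # restore the stock config if needed (illustrative):
	  # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist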
	I0731 23:55:38.776854    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:55:38.776854    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 23:55:38.795453    9020 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:55:38.795453    9020 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:55:38.811536    9020 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:55:38.824082    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 23:55:38.855355    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:55:38.874587    9020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:55:38.886896    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:55:38.918370    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:55:38.952490    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:55:38.984140    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:55:39.014888    9020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:55:39.045416    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:55:39.075835    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:55:39.105592    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:55:39.136884    9020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:55:39.156637    9020 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:55:39.168320    9020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:55:39.196791    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:39.391472    9020 ssh_runner.go:195] Run: sudo systemctl restart containerd
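	Those sed edits switch containerd to the cgroupfs driver, pin the pause/sandbox image, and re-point the CNI conf dir before the restart. A quick way to confirm the rewritten config took effect (keys assumed present in the generated config.toml):
	  grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  sudo systemctl is-active containerd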
	I0731 23:55:39.423944    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:55:39.435390    9020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:55:39.458952    9020 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:55:39.458952    9020 command_runner.go:130] > [Unit]
	I0731 23:55:39.458952    9020 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:55:39.458952    9020 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:55:39.458952    9020 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:55:39.458952    9020 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:55:39.458952    9020 command_runner.go:130] > StartLimitBurst=3
	I0731 23:55:39.458952    9020 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:55:39.458952    9020 command_runner.go:130] > [Service]
	I0731 23:55:39.458952    9020 command_runner.go:130] > Type=notify
	I0731 23:55:39.458952    9020 command_runner.go:130] > Restart=on-failure
	I0731 23:55:39.458952    9020 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:55:39.458952    9020 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:55:39.458952    9020 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:55:39.459520    9020 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:55:39.459520    9020 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:55:39.459599    9020 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:55:39.459599    9020 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:55:39.459599    9020 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:55:39.459599    9020 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecStart=
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:55:39.459599    9020 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:55:39.459599    9020 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > LimitCORE=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:55:39.459599    9020 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:55:39.459599    9020 command_runner.go:130] > TasksMax=infinity
	I0731 23:55:39.459599    9020 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:55:39.459599    9020 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:55:39.459599    9020 command_runner.go:130] > Delegate=yes
	I0731 23:55:39.459599    9020 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:55:39.459599    9020 command_runner.go:130] > KillMode=process
	I0731 23:55:39.459599    9020 command_runner.go:130] > [Install]
	I0731 23:55:39.459599    9020 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:55:39.472871    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:55:39.502315    9020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:55:39.555548    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:55:39.593597    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:55:39.625818    9020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:55:39.695261    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:55:39.716485    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:55:39.750433    9020 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
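	With docker as the runtime, crictl is pointed at the cri-dockerd socket rather than containerd's; the file written above is that single runtime-endpoint line. The endpoint can be exercised directly (the runner performs the same check via "crictl version" a little further down):
	  cat /etc/crictl.yaml
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version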
	I0731 23:55:39.763453    9020 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:55:39.768931    9020 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:55:39.781103    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:55:39.798686    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:55:39.838571    9020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:55:40.025101    9020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:55:40.184564    9020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:55:40.184564    9020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:55:40.225612    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:40.404240    9020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:55:43.058791    9020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.654378s)
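	The 130-byte daemon.json pushed just before this restart is what flips docker itself to the cgroupfs driver; its exact contents are not echoed in the log, but a minimal equivalent would look like the following sketch (not the verbatim payload):
	  printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
	  sudo systemctl restart docker
	  docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs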
	I0731 23:55:43.069789    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:55:43.104525    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:55:43.139825    9020 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:55:43.312524    9020 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:55:43.476549    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:43.640784    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:55:43.677549    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:55:43.713360    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:43.889437    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:55:43.982972    9020 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:55:43.994548    9020 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:55:44.003307    9020 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:55:44.003380    9020 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:55:44.003380    9020 command_runner.go:130] > Device: 0,22	Inode: 865         Links: 1
	I0731 23:55:44.003380    9020 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:55:44.003380    9020 command_runner.go:130] > Access: 2024-07-31 23:55:43.930593337 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] > Modify: 2024-07-31 23:55:43.930593337 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] > Change: 2024-07-31 23:55:43.933593361 +0000
	I0731 23:55:44.003380    9020 command_runner.go:130] >  Birth: -
	I0731 23:55:44.003451    9020 start.go:563] Will wait 60s for crictl version
	I0731 23:55:44.015556    9020 ssh_runner.go:195] Run: which crictl
	I0731 23:55:44.021044    9020 command_runner.go:130] > /usr/bin/crictl
	I0731 23:55:44.031791    9020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:55:44.091714    9020 command_runner.go:130] > Version:  0.1.0
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeName:  docker
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:55:44.091881    9020 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:55:44.091963    9020 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:55:44.100288    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:55:44.133383    9020 command_runner.go:130] > 27.1.1
	I0731 23:55:44.143490    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:55:44.170758    9020 command_runner.go:130] > 27.1.1
	I0731 23:55:44.175213    9020 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:55:44.175808    9020 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:55:44.179569    9020 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:55:44.182638    9020 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:55:44.182638    9020 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:55:44.192311    9020 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:55:44.198398    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
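	The one-liner above is the pattern minikube uses for idempotent /etc/hosts edits: strip any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back into place with sudo. Verifying the result from inside the VM:
	  grep host.minikube.internal /etc/hosts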
	I0731 23:55:44.218156    9020 kubeadm.go:883] updating cluster {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:55:44.218463    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:55:44.228577    9020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:55:44.253863    9020 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0731 23:55:44.253863    9020 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 23:55:44.254941    9020 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 23:55:44.255002    9020 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 23:55:44.255032    9020 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 23:55:44.255032    9020 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:55:44.255100    9020 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0731 23:55:44.255214    9020 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0731 23:55:44.255273    9020 docker.go:615] Images already preloaded, skipping extraction
	I0731 23:55:44.266878    9020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 23:55:44.291889    9020 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0731 23:55:44.291889    9020 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0731 23:55:44.292521    9020 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0731 23:55:44.292521    9020 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:55:44.292521    9020 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0731 23:55:44.292632    9020 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0731 23:55:44.292632    9020 cache_images.go:84] Images are preloaded, skipping loading
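	The preload check boils down to asking whether every image the v1.30.3 bundle needs is already present in the docker daemon. A rough standalone version of the same test (image list abbreviated):
	  expected='registry.k8s.io/kube-apiserver:v1.30.3 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/pause:3.9'
	  have=$(docker images --format '{{.Repository}}:{{.Tag}}')
	  for img in $expected; do echo "$have" | grep -qx "$img" || echo "missing: $img"; done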
	I0731 23:55:44.292632    9020 kubeadm.go:934] updating node { 172.17.27.27 8443 v1.30.3 docker true true} ...
	I0731 23:55:44.292632    9020 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.27.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:55:44.302505    9020 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 23:55:44.368959    9020 command_runner.go:130] > cgroupfs
	I0731 23:55:44.369197    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:55:44.369303    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:55:44.369303    9020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:55:44.369383    9020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.27.27 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-411400 NodeName:multinode-411400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.27.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.27.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:55:44.369690    9020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.27.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-411400"
	  kubeletExtraArgs:
	    node-ip: 172.17.27.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:55:44.381566    9020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:55:44.399701    9020 command_runner.go:130] > kubeadm
	I0731 23:55:44.400453    9020 command_runner.go:130] > kubectl
	I0731 23:55:44.400453    9020 command_runner.go:130] > kubelet
	I0731 23:55:44.400524    9020 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:55:44.414990    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:55:44.433566    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0731 23:55:44.464697    9020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:55:44.492119    9020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
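	The rendered kubeadm config lands in /var/tmp/minikube/kubeadm.yaml.new; the restart path further down only reconfigures the cluster when it differs from the previous kubeadm.yaml. Equivalent manual checks (the "config validate" subcommand is assumed to be available in this kubeadm release):
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no drift"
	  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new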
	I0731 23:55:44.536267    9020 ssh_runner.go:195] Run: grep 172.17.27.27	control-plane.minikube.internal$ /etc/hosts
	I0731 23:55:44.542451    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.27.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:55:44.573227    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:44.751717    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:55:44.777678    9020 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.27.27
	I0731 23:55:44.777678    9020 certs.go:194] generating shared ca certs ...
	I0731 23:55:44.777678    9020 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:44.778450    9020 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:55:44.778975    9020 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:55:44.779188    9020 certs.go:256] generating profile certs ...
	I0731 23:55:44.780202    9020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\client.key
	I0731 23:55:44.780365    9020 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3
	I0731 23:55:44.780516    9020 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.27.27]
	I0731 23:55:45.252832    9020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 ...
	I0731 23:55:45.252832    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3: {Name:mkdb51b0d280536affe66ab51b6a08832fa60b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:45.254830    9020 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3 ...
	I0731 23:55:45.254830    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3: {Name:mk841b7da4da410d1e8b99278113a65ffb8f6558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:45.255329    9020 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt.08a904b3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt
	I0731 23:55:45.269260    9020 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key.08a904b3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key
	I0731 23:55:45.271064    9020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:55:45.271064    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:55:45.271699    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 23:55:45.271769    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 23:55:45.271769    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 23:55:45.272357    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 23:55:45.272357    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:55:45.273102    9020 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:55:45.273212    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:55:45.273302    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:55:45.273302    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:55:45.273936    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:55:45.273936    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:55:45.274632    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.274762    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:55:45.274762    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.276189    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:55:45.323679    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:55:45.368641    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:55:45.417472    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:55:45.458469    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 23:55:45.498699    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:55:45.544197    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:55:45.587523    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:55:45.635674    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:55:45.680937    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:55:45.726057    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:55:45.768047    9020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:55:45.815396    9020 ssh_runner.go:195] Run: openssl version
	I0731 23:55:45.823403    9020 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:55:45.834951    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:55:45.866428    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.872431    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.872431    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.883641    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:55:45.891747    9020 command_runner.go:130] > b5213941
	I0731 23:55:45.903141    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:55:45.933044    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:55:45.963641    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.970884    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.971048    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.982335    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:55:45.993114    9020 command_runner.go:130] > 51391683
	I0731 23:55:46.005115    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 23:55:46.038924    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:55:46.073403    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.079598    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.079598    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.091216    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:55:46.102142    9020 command_runner.go:130] > 3ec20f2e
	I0731 23:55:46.114740    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
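	Each CA certificate gets a symlink named after its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e above); that hash-named link is what TLS clients using the system trust store actually resolve. The idiom in isolation:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	  ls -l "/etc/ssl/certs/${h}.0"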
	I0731 23:55:46.145487    9020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:55:46.152576    9020 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:55:46.152576    9020 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 23:55:46.152576    9020 command_runner.go:130] > Device: 8,1	Inode: 531538      Links: 1
	I0731 23:55:46.152576    9020 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:55:46.152576    9020 command_runner.go:130] > Access: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] > Modify: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] > Change: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.152576    9020 command_runner.go:130] >  Birth: 2024-07-31 23:32:14.297746386 +0000
	I0731 23:55:46.162891    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:55:46.172003    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.183721    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:55:46.191931    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.202382    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:55:46.211030    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.221288    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:55:46.229966    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.242422    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:55:46.251654    9020 command_runner.go:130] > Certificate will not expire
	I0731 23:55:46.263249    9020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 23:55:46.271755    9020 command_runner.go:130] > Certificate will not expire
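	openssl's -checkend flag exits non-zero when the certificate would expire within the given number of seconds, so 86400 is a "will it survive the next 24 hours" test. A similar sweep over a few of the same files as a loop:
	  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    openssl x509 -noout -in /var/lib/minikube/certs/${c}.crt -checkend 86400 \
	      && echo "${c}: ok" || echo "${c}: expiring within 24h"
	  done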
	I0731 23:55:46.272350    9020 kubeadm.go:392] StartCluster: {Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.28.42 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:55:46.282008    9020 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 23:55:46.316616    9020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0731 23:55:46.332873    9020 command_runner.go:130] > /var/lib/minikube/etcd:
	I0731 23:55:46.332873    9020 command_runner.go:130] > member
	I0731 23:55:46.332873    9020 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 23:55:46.332873    9020 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 23:55:46.345216    9020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 23:55:46.362659    9020 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:55:46.363845    9020 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-411400" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:46.364542    9020 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-411400" cluster setting kubeconfig missing "multinode-411400" context setting]
	I0731 23:55:46.365162    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:46.381838    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:46.382503    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:55:46.384012    9020 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 23:55:46.394668    9020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:55:46.411599    9020 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0731 23:55:46.411681    9020 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:55:46.411681    9020 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0731 23:55:46.411681    9020 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0731 23:55:46.411681    9020 command_runner.go:130] >  kind: InitConfiguration
	I0731 23:55:46.411681    9020 command_runner.go:130] >  localAPIEndpoint:
	I0731 23:55:46.411681    9020 command_runner.go:130] > -  advertiseAddress: 172.17.20.56
	I0731 23:55:46.411681    9020 command_runner.go:130] > +  advertiseAddress: 172.17.27.27
	I0731 23:55:46.411681    9020 command_runner.go:130] >    bindPort: 8443
	I0731 23:55:46.411681    9020 command_runner.go:130] >  bootstrapTokens:
	I0731 23:55:46.411757    9020 command_runner.go:130] >    - groups:
	I0731 23:55:46.411757    9020 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0731 23:55:46.411797    9020 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0731 23:55:46.411797    9020 command_runner.go:130] >    name: "multinode-411400"
	I0731 23:55:46.411797    9020 command_runner.go:130] >    kubeletExtraArgs:
	I0731 23:55:46.411797    9020 command_runner.go:130] > -    node-ip: 172.17.20.56
	I0731 23:55:46.411830    9020 command_runner.go:130] > +    node-ip: 172.17.27.27
	I0731 23:55:46.411830    9020 command_runner.go:130] >    taints: []
	I0731 23:55:46.411830    9020 command_runner.go:130] >  ---
	I0731 23:55:46.411830    9020 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0731 23:55:46.411830    9020 command_runner.go:130] >  kind: ClusterConfiguration
	I0731 23:55:46.411830    9020 command_runner.go:130] >  apiServer:
	I0731 23:55:46.411830    9020 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.20.56"]
	I0731 23:55:46.411830    9020 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	I0731 23:55:46.411830    9020 command_runner.go:130] >    extraArgs:
	I0731 23:55:46.411830    9020 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0731 23:55:46.411830    9020 command_runner.go:130] >  controllerManager:
	I0731 23:55:46.411830    9020 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.20.56
	+  advertiseAddress: 172.17.27.27
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-411400"
	   kubeletExtraArgs:
	-    node-ip: 172.17.20.56
	+    node-ip: 172.17.27.27
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.20.56"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.27.27"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0731 23:55:46.411830    9020 kubeadm.go:1160] stopping kube-system containers ...
	I0731 23:55:46.421547    9020 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 23:55:46.448123    9020 command_runner.go:130] > 378f2a659316
	I0731 23:55:46.448123    9020 command_runner.go:130] > 7a9f5c5f9957
	I0731 23:55:46.448123    9020 command_runner.go:130] > 1d63a0cb77d5
	I0731 23:55:46.448191    9020 command_runner.go:130] > 8da81f74292e
	I0731 23:55:46.448191    9020 command_runner.go:130] > 284902a3378a
	I0731 23:55:46.448191    9020 command_runner.go:130] > 07b42ba54367
	I0731 23:55:46.448191    9020 command_runner.go:130] > 0ae3ab4f2984
	I0731 23:55:46.448191    9020 command_runner.go:130] > 7c2aeeb2eba1
	I0731 23:55:46.448191    9020 command_runner.go:130] > 534fd9010fca
	I0731 23:55:46.448191    9020 command_runner.go:130] > 945a9963cd1c
	I0731 23:55:46.448277    9020 command_runner.go:130] > 54a3651cfe8b
	I0731 23:55:46.448277    9020 command_runner.go:130] > 6ce3944d7d13
	I0731 23:55:46.448277    9020 command_runner.go:130] > 78312ba260a7
	I0731 23:55:46.448277    9020 command_runner.go:130] > 785da79d42d7
	I0731 23:55:46.448277    9020 command_runner.go:130] > 74068ed5155b
	I0731 23:55:46.448330    9020 command_runner.go:130] > 68e7a182b5fc
	I0731 23:55:46.448355    9020 docker.go:483] Stopping containers: [378f2a659316 7a9f5c5f9957 1d63a0cb77d5 8da81f74292e 284902a3378a 07b42ba54367 0ae3ab4f2984 7c2aeeb2eba1 534fd9010fca 945a9963cd1c 54a3651cfe8b 6ce3944d7d13 78312ba260a7 785da79d42d7 74068ed5155b 68e7a182b5fc]
	I0731 23:55:46.457162    9020 ssh_runner.go:195] Run: docker stop 378f2a659316 7a9f5c5f9957 1d63a0cb77d5 8da81f74292e 284902a3378a 07b42ba54367 0ae3ab4f2984 7c2aeeb2eba1 534fd9010fca 945a9963cd1c 54a3651cfe8b 6ce3944d7d13 78312ba260a7 785da79d42d7 74068ed5155b 68e7a182b5fc
	I0731 23:55:46.480184    9020 command_runner.go:130] > 378f2a659316
	I0731 23:55:46.480184    9020 command_runner.go:130] > 7a9f5c5f9957
	I0731 23:55:46.480184    9020 command_runner.go:130] > 1d63a0cb77d5
	I0731 23:55:46.480184    9020 command_runner.go:130] > 8da81f74292e
	I0731 23:55:46.480184    9020 command_runner.go:130] > 284902a3378a
	I0731 23:55:46.480184    9020 command_runner.go:130] > 07b42ba54367
	I0731 23:55:46.480184    9020 command_runner.go:130] > 0ae3ab4f2984
	I0731 23:55:46.480184    9020 command_runner.go:130] > 7c2aeeb2eba1
	I0731 23:55:46.480184    9020 command_runner.go:130] > 534fd9010fca
	I0731 23:55:46.480285    9020 command_runner.go:130] > 945a9963cd1c
	I0731 23:55:46.480285    9020 command_runner.go:130] > 54a3651cfe8b
	I0731 23:55:46.480285    9020 command_runner.go:130] > 6ce3944d7d13
	I0731 23:55:46.480285    9020 command_runner.go:130] > 78312ba260a7
	I0731 23:55:46.480285    9020 command_runner.go:130] > 785da79d42d7
	I0731 23:55:46.480285    9020 command_runner.go:130] > 74068ed5155b
	I0731 23:55:46.480285    9020 command_runner.go:130] > 68e7a182b5fc
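	Stopping the kube-system containers is a two-step Docker CLI flow: list the matching container IDs, then stop them all in a single command, exactly as the two Run lines above show. A hedged sketch of that flow, assuming a local Docker daemon rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List container IDs for kube-system pods, mirroring the filter used in the log.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        // Stop them all in one invocation, as the log does.
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            fmt.Println("docker stop failed:", err)
        }
    }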
	I0731 23:55:46.491781    9020 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 23:55:46.531053    9020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0731 23:55:46.547690    9020 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:55:46.548680    9020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:55:46.548680    9020 kubeadm.go:157] found existing configuration files:
	
	I0731 23:55:46.559565    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:55:46.574187    9020 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:55:46.574187    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:55:46.586381    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:55:46.610757    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:55:46.626460    9020 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:55:46.626902    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:55:46.638392    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:55:46.664735    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:55:46.679802    9020 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:55:46.679802    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:55:46.691235    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:55:46.718843    9020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:55:46.736165    9020 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:55:46.736932    9020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:55:46.747610    9020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
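	The stale-config cleanup walks the four kubeconfig files under /etc/kubernetes and removes any that do not already point at https://control-plane.minikube.internal:8443, so the kubeadm phases that follow can regenerate them. A compact sketch of that loop, assuming local file access instead of the sudo grep / rm commands in the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, name := range files {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or stale server address: drop it so kubeadm rewrites it.
                if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Fprintln(os.Stderr, rmErr)
                }
            }
        }
    }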
	I0731 23:55:46.776526    9020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:55:46.793926    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:47.048376    9020 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:55:47.049192    9020 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 23:55:47.049395    9020 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 23:55:47.049643    9020 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 23:55:47.050392    9020 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0731 23:55:47.051417    9020 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0731 23:55:47.058267    9020 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0731 23:55:47.059354    9020 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0731 23:55:47.059576    9020 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0731 23:55:47.060584    9020 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 23:55:47.060584    9020 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 23:55:47.061813    9020 command_runner.go:130] > [certs] Using the existing "sa" key
	I0731 23:55:47.064425    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.464886    9020 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:55:48.465420    9020 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:55:48.465482    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4009568s)
	I0731 23:55:48.465546    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.734790    9020 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:55:48.734790    9020 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:55:48.734893    9020 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:55:48.734950    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.838950    9020 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:55:48.839067    9020 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:55:48.839130    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:48.936613    9020 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
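	Rather than running a full kubeadm init, the restart replays individual init phases in the order logged above (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that sequence, assuming kubeadm is on the local PATH (the log pins a binaries directory and runs each phase through sudo over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Print(string(out))
            if err != nil {
                fmt.Println("phase failed:", err)
                return
            }
        }
    }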
	I0731 23:55:48.936838    9020 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:55:48.948263    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:49.458718    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:49.961890    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.453667    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.960826    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:55:50.985141    9020 command_runner.go:130] > 1911
	I0731 23:55:50.985141    9020 api_server.go:72] duration metric: took 2.0483425s to wait for apiserver process to appear ...
	I0731 23:55:50.985141    9020 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:55:50.985141    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.018852    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 23:55:54.018852    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 23:55:54.018852    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.129670    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:54.129741    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:54.491399    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:54.499039    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:54.499370    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:54.990921    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:55.008112    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:55:55.008112    9020 api_server.go:103] status: https://172.17.27.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:55:55.500426    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:55:55.511907    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
	I0731 23:55:55.511907    9020 round_trippers.go:463] GET https://172.17.27.27:8443/version
	I0731 23:55:55.511907    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:55.511907    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:55.511907    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:55.532481    9020 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0731 23:55:55.532481    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Audit-Id: c00843c2-f504-4bc6-8632-e4c4028c65d5
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:55.532481    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:55.532481    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Content-Length: 263
	I0731 23:55:55.532481    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:55 GMT
	I0731 23:55:55.532481    9020 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 23:55:55.533583    9020 api_server.go:141] control plane version: v1.30.3
	I0731 23:55:55.533729    9020 api_server.go:131] duration metric: took 4.5485302s to wait for apiserver health ...
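	The health wait above is a plain poll of the apiserver's /healthz endpoint until it answers 200; the 403 (anonymous user) and 500 (post-start hooks still failing) responses are expected intermediate states while RBAC bootstrap and the other hooks finish. A minimal polling sketch, assuming an anonymous HTTPS client that skips certificate verification, which the real client does not need to do since it presents the cluster's client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Self-signed apiserver certificate; verification is skipped only for this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://172.17.27.27:8443/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", code, "- retrying")
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }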
	I0731 23:55:55.533729    9020 cni.go:84] Creating CNI manager for ""
	I0731 23:55:55.533801    9020 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:55:55.539946    9020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 23:55:55.555310    9020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 23:55:55.562316    9020 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 23:55:55.562316    9020 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0731 23:55:55.562369    9020 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0731 23:55:55.562369    9020 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:55:55.562369    9020 command_runner.go:130] > Access: 2024-07-31 23:54:24.316709300 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] > Change: 2024-07-31 23:54:16.502000000 +0000
	I0731 23:55:55.562369    9020 command_runner.go:130] >  Birth: -
	I0731 23:55:55.562369    9020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 23:55:55.562505    9020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 23:55:55.612607    9020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 23:55:56.954646    9020 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0731 23:55:56.955684    9020 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0731 23:55:56.955742    9020 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0731 23:55:56.955742    9020 command_runner.go:130] > daemonset.apps/kindnet configured
	I0731 23:55:56.955796    9020 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3431726s)
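	With three nodes detected, kindnet is selected as the CNI and its manifest is applied with the bundled kubectl, as the four "unchanged/configured" lines above confirm. For illustration, a sketch of the equivalent invocation, assuming the guest-side paths from the log are reachable locally:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl",
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }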
	I0731 23:55:56.955935    9020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:55:56.956121    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:55:56.956185    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:56.956205    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:56.956205    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:56.962546    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:55:56.963304    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Audit-Id: e6998b80-b52b-4891-850f-f790a75abcae
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:56.963304    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:56.963304    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:56.963304    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:56 GMT
	I0731 23:55:56.965003    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87652 chars]
	I0731 23:55:56.972636    9020 system_pods.go:59] 12 kube-system pods found
	I0731 23:55:56.972703    9020 system_pods.go:61] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 23:55:56.972703    9020 system_pods.go:61] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:55:56.972703    9020 system_pods.go:61] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 23:55:56.972703    9020 system_pods.go:61] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:55:56.972703    9020 system_pods.go:74] duration metric: took 16.7677ms to wait for pod list to return data ...
	I0731 23:55:56.972703    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:55:56.972703    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:55:56.972703    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:56.972703    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:56.972703    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:56.977871    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:55:56.977871    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Audit-Id: 8003ed79-6fff-43da-9002-2ba89eb97101
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:56.977871    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:56.977871    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:56.977871    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:56.977871    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1846"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15625 chars]
	I0731 23:55:56.979881    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.979946    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980000    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.980000    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980000    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:55:56.980000    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:55:56.980067    9020 node_conditions.go:105] duration metric: took 7.3645ms to run NodePressure ...
	I0731 23:55:56.980067    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:55:57.330938    9020 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 23:55:57.330938    9020 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 23:55:57.330938    9020 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 23:55:57.330938    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0731 23:55:57.330938    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.330938    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.330938    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.337984    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:55:57.337984    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.337984    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.337984    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Audit-Id: bb264a15-b455-4f27-a94c-ba5283e93d78
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.337984    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.339072    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1850"},"items":[{"metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1780","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30501 chars]
	I0731 23:55:57.340757    9020 kubeadm.go:739] kubelet initialised
	I0731 23:55:57.340757    9020 kubeadm.go:740] duration metric: took 9.8193ms waiting for restarted kubelet to initialise ...
	I0731 23:55:57.340757    9020 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:55:57.340757    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:55:57.340757    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.340757    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.340757    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.362391    9020 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 23:55:57.362631    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Audit-Id: 0aba9c1c-8d0e-4efa-a86e-0d6848811039
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.362631    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.362631    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.362631    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.364239    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1850"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87652 chars]
	I0731 23:55:57.368173    9020 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.368805    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:55:57.368896    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.368959    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.368959    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.400616    9020 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0731 23:55:57.401647    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.401647    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.401647    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.401647    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Audit-Id: f0288ba7-5f32-43a6-a9df-f70ff282ee55
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.401768    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.402052    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:55:57.402672    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.402728    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.402728    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.402728    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.405654    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:57.405654    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Audit-Id: e350ea03-0dfd-48b8-96e1-4e257768e241
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.405654    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.405654    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.405654    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.406126    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.407384    9020 pod_ready.go:97] node "multinode-411400" hosting pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.407384    9020 pod_ready.go:81] duration metric: took 39.2103ms for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.407384    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
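	Each pod_ready block above short-circuits when the hosting node is not Ready: the wait is skipped, the condition is logged, and the loop moves on to the next control-plane pod. A small sketch of that decision, using simplified stand-in structs for the Pod and Node fields the check actually inspects:

    package main

    import "fmt"

    // Minimal stand-ins for the fields the readiness check looks at.
    type condition struct {
        Type   string
        Status string
    }

    type node struct {
        Name       string
        Conditions []condition
    }

    type pod struct {
        Name     string
        NodeName string
    }

    func nodeReady(n node) bool {
        for _, c := range n.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True"
            }
        }
        return false
    }

    // shouldSkipWait mirrors the "(skipping!)" branch in the log: if the hosting
    // node is not Ready, waiting for the pod to become Ready is pointless for now.
    func shouldSkipWait(p pod, n node) bool {
        return p.NodeName == n.Name && !nodeReady(n)
    }

    func main() {
        n := node{Name: "multinode-411400", Conditions: []condition{{Type: "Ready", Status: "False"}}}
        p := pod{Name: "coredns-7db6d8ff4d-z8gtw", NodeName: "multinode-411400"}
        if shouldSkipWait(p, n) {
            fmt.Printf("node %q not Ready; skipping wait for pod %q\n", n.Name, p.Name)
        }
    }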
	I0731 23:55:57.407384    9020 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.408358    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:55:57.410392    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.410392    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.410392    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.426418    9020 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 23:55:57.426418    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.426418    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.426418    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Audit-Id: 15b2c0c2-c127-47d7-99a5-8adac4a00679
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.426418    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.426418    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1780","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6373 chars]
	I0731 23:55:57.426418    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.426418    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.426418    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.426418    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.443381    9020 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 23:55:57.443381    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Audit-Id: 84d95dd4-b165-435a-93f2-6132f5b097e7
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.443381    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.443381    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.443381    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.444373    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.444373    9020 pod_ready.go:97] node "multinode-411400" hosting pod "etcd-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.444373    9020 pod_ready.go:81] duration metric: took 36.9886ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.444373    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "etcd-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.444373    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.444373    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:55:57.444373    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.444373    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.444373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.450360    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:55:57.450360    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.450360    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.450360    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Audit-Id: 6fe51bd8-edad-4a6f-b118-53f72637772d
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.450360    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.451500    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1779","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7929 chars]
	I0731 23:55:57.452226    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.452226    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.452226    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.452226    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.488708    9020 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0731 23:55:57.488708    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Audit-Id: 69c2cc17-db85-4e57-9eeb-1388b11ffeeb
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.488789    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.488789    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.488789    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.488864    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.489396    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-apiserver-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.489451    9020 pod_ready.go:81] duration metric: took 45.0771ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.489451    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-apiserver-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.489451    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.489629    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:55:57.489629    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.489683    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.489683    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.499445    9020 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 23:55:57.499445    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.499445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Audit-Id: 90f03f14-89dd-405c-bbb9-6fa0cc871e51
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.499445    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.499445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.499445    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1777","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0731 23:55:57.500460    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:57.500460    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.500460    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.500460    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.502440    9020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:55:57.503438    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Audit-Id: 0c11bfef-0f5b-4636-a368-d099a5594715
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.503488    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.503488    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.503488    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.503648    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:57.504103    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-controller-manager-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.504174    9020 pod_ready.go:81] duration metric: took 14.7231ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.504174    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-controller-manager-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:57.504247    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.556649    9020 request.go:629] Waited for 52.1154ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:55:57.556687    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:55:57.556687    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.556687    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.556687    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.559963    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:57.560028    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Audit-Id: 8cfd7098-49f3-4c48-a067-9e55915554af
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.560028    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.560028    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.560028    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.560150    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:55:57.761211    9020 request.go:629] Waited for 200.0921ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:55:57.761583    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:55:57.761668    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.761722    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.761722    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.765024    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:57.765248    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Audit-Id: 38420fbb-c1d1-4d97-ba70-d1c98e7e988e
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.765248    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.765248    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.765248    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.766126    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1757","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:55:57.766319    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:55:57.766319    9020 pod_ready.go:81] duration metric: took 262.0682ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:57.766319    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:55:57.766319    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:57.961983    9020 request.go:629] Waited for 195.4848ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:55:57.962089    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:55:57.962089    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:57.962089    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:57.962089    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:57.968955    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:55:57.969130    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Audit-Id: fcfae0dc-0f18-4fc1-9d0a-eadad8e818de
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:57.969130    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:57.969196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:57.969196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:57.969196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:57 GMT
	I0731 23:55:57.969935    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:55:58.165387    9020 request.go:629] Waited for 194.2487ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.165387    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.165387    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.165387    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.165775    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.169894    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:55:58.169894    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Audit-Id: 1a0b59b8-e5fe-47e2-b3b6-64c8902c7948
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.169972    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.169972    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.169972    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.169972    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:58.170723    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-proxy-chdxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.170723    9020 pod_ready.go:81] duration metric: took 404.3989ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:58.170846    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-proxy-chdxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.170846    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.365181    9020 request.go:629] Waited for 194.0071ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:55:58.365294    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:55:58.365294    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.365294    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.365294    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.368229    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.369227    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Audit-Id: 5fc054b8-090a-4c44-a299-b678cf84b9f3
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.369285    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.369285    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.369285    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.369285    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"610","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0731 23:55:58.569407    9020 request.go:629] Waited for 199.2126ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:55:58.569757    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:55:58.569757    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.569757    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.569757    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.572483    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.572483    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.572483    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.572483    9020 round_trippers.go:580]     Audit-Id: cc0f5b0d-c686-4c59-bf1e-38bcafa2211d
	I0731 23:55:58.573035    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.573035    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.573035    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.573035    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.573197    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"1679","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3825 chars]
	I0731 23:55:58.574152    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:55:58.574233    9020 pod_ready.go:81] duration metric: took 403.3815ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.574233    9020 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:55:58.756510    9020 request.go:629] Waited for 182.0789ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:55:58.756605    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:55:58.756605    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.756730    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.756822    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.759135    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:55:58.759135    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.759135    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.759135    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.759135    9020 round_trippers.go:580]     Audit-Id: 69ffc305-0ff3-488e-b8d6-30473cd7afa0
	I0731 23:55:58.760032    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.760070    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1778","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0731 23:55:58.958678    9020 request.go:629] Waited for 197.4273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.958896    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:58.959053    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:58.959108    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:58.959108    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:58.962375    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:58.962375    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:58.962375    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:58 GMT
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Audit-Id: 0c27f944-9a4f-4a69-976c-2d9a07b34440
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:58.962375    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:58.962375    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:58.963356    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:58.963745    9020 pod_ready.go:97] node "multinode-411400" hosting pod "kube-scheduler-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.963825    9020 pod_ready.go:81] duration metric: took 389.5877ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	E0731 23:55:58.963825    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400" hosting pod "kube-scheduler-multinode-411400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400" has status "Ready":"False"
	I0731 23:55:58.963925    9020 pod_ready.go:38] duration metric: took 1.6230476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
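	(Note: the wait loop that just finished GETs each system pod and then the node hosting it; when that node reports Ready=False or Unknown, the pod is logged with "(skipping!)" instead of being waited on. A minimal sketch of that check, assuming client-go; the function name and error text are illustrative, not minikube's actual pod_ready.go code.)

```go
package health

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReadyOnReadyNode reports whether a pod is Ready, but first verifies that
// the node hosting it is Ready; if not, it returns an error so the caller can
// skip waiting, mirroring the "(skipping!)" messages in the log above.
func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```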
	I0731 23:55:58.963925    9020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:55:58.992745    9020 command_runner.go:130] > -16
	I0731 23:55:58.992840    9020 ops.go:34] apiserver oom_adj: -16
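	(Note: the "apiserver oom_adj: -16" line is the output of running `cat /proc/$(pgrep kube-apiserver)/oom_adj` on the guest over SSH; a negative value makes the kernel's OOM killer much less likely to pick the apiserver. A minimal local sketch of the same check in Go, assuming a Linux host; pgrepFirst is an illustrative helper, not minikube code.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// pgrepFirst returns the first PID reported by `pgrep <name>` (illustrative helper).
func pgrepFirst(name string) (string, error) {
	out, err := exec.Command("pgrep", name).Output()
	if err != nil {
		return "", err
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return "", fmt.Errorf("no process named %q", name)
	}
	return pids[0], nil
}

func main() {
	pid, err := pgrepFirst("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// /proc/<pid>/oom_adj holds the legacy OOM-killer adjustment; -16 means the
	// process is strongly protected from being OOM-killed.
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
}
```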
	I0731 23:55:58.992840    9020 kubeadm.go:597] duration metric: took 12.6598063s to restartPrimaryControlPlane
	I0731 23:55:58.992840    9020 kubeadm.go:394] duration metric: took 12.7203289s to StartCluster
	I0731 23:55:58.992903    9020 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:58.993042    9020 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:55:58.994152    9020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:55:58.996303    9020 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 23:55:58.996241    9020 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:55:58.996846    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:55:59.003558    9020 out.go:177] * Verifying Kubernetes components...
	I0731 23:55:59.010577    9020 out.go:177] * Enabled addons: 
	I0731 23:55:59.018254    9020 addons.go:510] duration metric: took 22.0124ms for enable addons: enabled=[]
	I0731 23:55:59.024220    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:55:59.317039    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:55:59.343141    9020 node_ready.go:35] waiting up to 6m0s for node "multinode-411400" to be "Ready" ...
	I0731 23:55:59.343478    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:59.343478    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:59.343554    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:59.343554    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:59.348356    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:55:59.348356    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Audit-Id: 1f9449f5-e768-4963-9eef-c77267852ca2
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:59.348356    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:59.348356    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:59.348356    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:59 GMT
	I0731 23:55:59.348356    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:55:59.853348    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:55:59.853520    9020 round_trippers.go:469] Request Headers:
	I0731 23:55:59.853520    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:55:59.853520    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:55:59.856874    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:55:59.856874    9020 round_trippers.go:577] Response Headers:
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Audit-Id: 022a6f7c-ac57-4ec7-ae9e-1982affd4ff9
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:55:59.856874    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:55:59.856874    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:55:59.856874    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:55:59 GMT
	I0731 23:55:59.858147    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:00.355395    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:00.355489    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:00.355489    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:00.355489    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:00.358438    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:00.359470    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:00 GMT
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Audit-Id: 72707bce-9584-49d8-91cf-9c7fe5abf303
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:00.359470    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:00.359470    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:00.359470    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:00.360048    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:00.851828    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:00.851910    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:00.851910    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:00.851910    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:00.856246    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:00.856246    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:00.856246    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:00.856246    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:00.856246    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:00.856246    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:00.856461    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:00 GMT
	I0731 23:56:00.856461    9020 round_trippers.go:580]     Audit-Id: b587087e-9f30-41ae-9fb8-a8744eb0707e
	I0731 23:56:00.856945    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:01.353979    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:01.353979    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:01.353979    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:01.353979    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:01.356971    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:01.356971    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:01.356971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:01 GMT
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Audit-Id: 289bbf9a-3594-4324-96a1-b8ea9fe5565d
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:01.356971    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:01.356971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:01.358075    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:01.358619    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:01.854476    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:01.854476    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:01.854476    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:01.854476    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:01.870318    9020 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 23:56:01.870596    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:01.870596    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:01.870596    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:01.870596    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:01 GMT
	I0731 23:56:01.870596    9020 round_trippers.go:580]     Audit-Id: 14def455-3971-425e-a526-b0b5c7662d5b
	I0731 23:56:01.870685    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:01.870685    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:01.871078    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:02.357588    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:02.357775    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:02.357775    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:02.357775    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:02.361299    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:02.361299    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:02.361299    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:02.361299    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:02 GMT
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Audit-Id: 7a073046-28e0-4d2b-8423-ad2c1428df82
	I0731 23:56:02.361299    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:02.361977    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:02.844662    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:02.844891    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:02.844891    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:02.844891    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:02.849611    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:02.849611    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:02.849611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:02.849611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:02 GMT
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Audit-Id: 44abbc17-1a5c-4f87-a835-2fcb1b4cadae
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:02.849611    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:02.850192    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.348381    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:03.348381    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:03.348463    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:03.348463    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:03.351729    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:03.352445    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:03 GMT
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Audit-Id: 73bf4412-1b44-4c22-b52e-7b2c179f97ae
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:03.352445    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:03.352445    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:03.352507    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:03.352507    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.849381    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:03.849490    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:03.849490    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:03.849490    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:03.852799    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:03.852799    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:03 GMT
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Audit-Id: d070cd40-4a0a-496f-91f1-419d00deb350
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:03.852799    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:03.852799    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:03.853519    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:03.853836    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:03.854341    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
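	(Note: from here on the log is a roughly 500ms polling loop: GET the node object, inspect its Ready condition, repeat for up to 6 minutes per the "waiting up to 6m0s for node" line above. A minimal sketch of that loop using client-go's wait helper; the interval, timeout, and node name are taken from the log, everything else is illustrative.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms for up to 6 minutes, as the log above does.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-411400", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("node never reported Ready:", err)
		return
	}
	fmt.Println("node is Ready")
}
```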
	I0731 23:56:04.348106    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:04.348106    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:04.348106    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:04.348106    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:04.354338    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:04.354433    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:04.354433    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:04.354433    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:04 GMT
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Audit-Id: a207e2c2-5a7b-42c6-8cf4-56885b58d44e
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:04.354503    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:04.354544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:04.354934    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:04.846436    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:04.846436    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:04.846436    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:04.846436    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:04.850822    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:04.850852    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:04.850852    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:04.850852    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:04 GMT
	I0731 23:56:04.850852    9020 round_trippers.go:580]     Audit-Id: c860df3b-7dc7-4e48-942f-675376dbb963
	I0731 23:56:04.850852    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:05.357823    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:05.358216    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:05.358263    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:05.358263    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:05.362670    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:05.362670    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:05 GMT
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Audit-Id: 45c1f0c0-15a2-4d08-9cc7-8c0eb8b59e36
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:05.362670    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:05.363068    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:05.363068    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:05.363296    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:05.844444    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:05.844444    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:05.844444    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:05.844444    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:05.848065    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:05.848065    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:05.848065    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:05.848065    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:05 GMT
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Audit-Id: b12bfdde-cb19-4c09-8e51-b56b3b0255c1
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:05.848065    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:05.848937    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:06.357894    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:06.357972    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:06.357972    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:06.357972    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:06.361366    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:06.361366    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:06 GMT
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Audit-Id: 1cc6e887-1033-4265-9a2b-60fe3ff4763d
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:06.361366    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:06.361366    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:06.361366    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:06.362185    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1769","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0731 23:56:06.362641    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:06.846835    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:06.846835    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:06.846835    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:06.846835    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:06.849455    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:06.849696    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Audit-Id: 14cc98cf-eb34-4c03-8163-2b681a72dccb
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:06.849696    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:06.849696    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:06.849771    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:06.849771    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:06 GMT
	I0731 23:56:06.849927    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:07.346851    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:07.346851    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:07.346851    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:07.346851    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:07.350498    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:07.351002    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Audit-Id: 377c342d-ffec-45c3-a054-b9e9b868239c
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:07.351002    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:07.351002    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:07.351002    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:07 GMT
	I0731 23:56:07.351002    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:07.844063    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:07.844063    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:07.844194    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:07.844194    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:07.848097    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:07.848577    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:07.848577    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:07 GMT
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Audit-Id: 09c14c31-88ef-47b9-b2cc-0aa755125a58
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:07.848577    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:07.848577    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:07.848577    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.345300    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:08.345642    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:08.345642    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:08.345690    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:08.348292    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:08.348350    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:08.348350    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:08.348350    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:08 GMT
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Audit-Id: 370ef678-fd9a-48cd-a621-45fc655457e1
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:08.348350    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:08.349055    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.845492    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:08.845732    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:08.845732    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:08.845732    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:08.850942    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:08.850942    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Audit-Id: a589f167-1012-4e2f-8101-eb0bcef9664b
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:08.850942    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:08.850942    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:08.850942    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:08 GMT
	I0731 23:56:08.851520    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:08.851697    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:09.348313    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:09.348313    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:09.348313    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:09.348313    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:09.352817    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:09.352817    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Audit-Id: f7e3e44c-c69a-43c9-a50a-12b0785f5021
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:09.353224    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:09.353224    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:09.353224    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:09 GMT
	I0731 23:56:09.353465    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:09.846437    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:09.846662    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:09.846662    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:09.846662    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:09.849976    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:09.850626    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:09.850626    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:09 GMT
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Audit-Id: b93926cc-7b26-4a56-96cd-ce345789698f
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:09.850626    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:09.850626    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:09.850705    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:10.347116    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:10.347116    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:10.347116    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:10.347116    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:10.350409    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:10.351463    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:10.351463    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:10.351463    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:10 GMT
	I0731 23:56:10.351463    9020 round_trippers.go:580]     Audit-Id: 2c6282cb-8c2c-4b0f-a8d5-d3245bff50a3
	I0731 23:56:10.351539    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:10.351737    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:10.845868    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:10.845986    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:10.845986    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:10.846097    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:10.849018    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:10.849018    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:10.849018    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:10 GMT
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Audit-Id: 85c6a43e-31fb-4cae-aaf2-8db1f6ccdb50
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:10.849817    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:10.849817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:10.850147    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:11.345135    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:11.345233    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:11.345233    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:11.345233    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:11.350275    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:11.350275    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:11.350275    9020 round_trippers.go:580]     Audit-Id: e463648e-0381-493f-bc98-fc3ebc3628fa
	I0731 23:56:11.350389    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:11.350389    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:11.350389    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:11.350389    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:11.350464    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:11 GMT
	I0731 23:56:11.350551    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:11.351051    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:11.846418    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:11.846418    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:11.846548    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:11.846548    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:11.852656    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:11.852698    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Audit-Id: c82505ad-292a-472c-aa93-4b38724a3164
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:11.852698    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:11.852698    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:11.852698    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:11 GMT
	I0731 23:56:11.852954    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:12.347010    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:12.347010    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:12.347010    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:12.347010    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:12.350630    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:12.350630    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:12.350630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:12.350630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:12.350630    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:12 GMT
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Audit-Id: 8c055061-ffa5-4ba3-a7dc-b399e0d3d6d7
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:12.351399    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:12.351566    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:12.847286    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:12.847547    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:12.847547    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:12.847547    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:12.854421    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:12.854421    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Audit-Id: 49a504c1-40e8-4f07-aeb1-2231e5695319
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:12.854421    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:12.854421    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:12.854421    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:12 GMT
	I0731 23:56:12.854977    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:13.349390    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:13.349390    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:13.349616    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:13.349616    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:13.352971    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:13.352971    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:13.352971    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:13 GMT
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Audit-Id: 02c63068-531d-481e-ad20-5894eda7d4b0
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:13.353179    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:13.353179    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:13.353382    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:13.353806    9020 node_ready.go:53] node "multinode-411400" has status "Ready":"False"
	I0731 23:56:13.844743    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:13.844743    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:13.844840    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:13.844840    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:13.847817    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:13.847817    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:13.847817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:13.847817    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:13 GMT
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Audit-Id: ad84d325-9653-466e-8e62-3b49d275aa82
	I0731 23:56:13.847817    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:13.848404    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:14.357831    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.357831    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.357930    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.357930    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.361330    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:14.361330    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Audit-Id: 574a782e-d523-41ce-80fc-72a8ccb6ee2f
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.361330    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.361330    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.361330    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.362342    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.362493    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1879","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0731 23:56:14.857183    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.857183    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.857183    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.857183    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.859883    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.859883    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.859883    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Audit-Id: 589bc9d4-8881-47d3-b328-dce17e97c7bd
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.860277    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.860277    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.860469    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:14.860742    9020 node_ready.go:49] node "multinode-411400" has status "Ready":"True"
	I0731 23:56:14.860742    9020 node_ready.go:38] duration metric: took 15.5172045s for node "multinode-411400" to be "Ready" ...
	I0731 23:56:14.860742    9020 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
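The loop above is minikube's node_ready poll: roughly every 500 ms it issues GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400 and inspects the Node's Ready condition, and once that condition reports True it switches to the pod_ready wait for the system-critical pods listed in the labels above. A minimal client-go sketch of that polling pattern (illustration only, assuming a standard kubeconfig; waitNodeReady is a hypothetical helper name, not minikube's own code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its Ready condition is True,
    // mirroring the ~500 ms node_ready loop shown in the log above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Load ~/.kube/config; the test itself uses minikube's generated kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "multinode-411400"); err != nil {
            panic(err)
        }
        fmt.Println("node multinode-411400 is Ready")
    }

The equivalent check from the command line is "kubectl wait --for=condition=Ready node/multinode-411400 --timeout=6m0s", and the follow-up wait on system pods maps to "kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns".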
	I0731 23:56:14.860742    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:14.860742    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.860742    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.860742    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.873449    9020 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 23:56:14.873449    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.873449    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Audit-Id: 37c84592-884c-40eb-808a-09f1df5ebaa6
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.873449    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.873449    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.875427    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1899"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86085 chars]
	I0731 23:56:14.880051    9020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:14.880252    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:14.880252    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.880252    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.880252    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.882842    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.882842    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Audit-Id: fe7d7c12-0f8f-42db-8d38-af28b1688ba0
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.882842    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.882842    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.882842    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.883868    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:14.884402    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:14.884491    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:14.884491    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:14.884491    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:14.886700    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:14.886700    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:14.886851    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:14.886851    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:14 GMT
	I0731 23:56:14.886851    9020 round_trippers.go:580]     Audit-Id: c52c4ee6-0d29-489d-8931-5e01e7eaa20d
	I0731 23:56:14.887121    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:15.388592    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:15.388592    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.388592    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.388592    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.392214    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:15.392214    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Audit-Id: cf2e8f89-6d79-4e4d-9730-cc80abd3b699
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.392214    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.392214    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.392214    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.392301    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.392459    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:15.393206    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:15.393206    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.393206    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.393206    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.397902    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:15.397902    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.397902    9020 round_trippers.go:580]     Audit-Id: a5493160-594e-456a-a362-2deddff8baaa
	I0731 23:56:15.397902    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.397996    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.397996    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.397996    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.397996    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.398207    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:15.888425    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:15.888425    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.888425    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.888611    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.892012    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:15.892548    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.892548    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.892548    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Audit-Id: b2ea96e8-3993-41dc-9c5b-0b06043182d2
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.892653    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.892924    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:15.894160    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:15.894160    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:15.894250    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:15.894250    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:15.896497    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:15.896497    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Audit-Id: 43b5ade4-ccb1-42f7-8b0c-95a4cd41f860
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:15.896497    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:15.896497    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:15.896497    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:15.897196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:15 GMT
	I0731 23:56:15.897610    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.389073    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:16.389073    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.389073    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.389151    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.392392    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:16.392759    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.392759    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.392759    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Audit-Id: d2080e00-8048-4613-bba1-b00e8a265f51
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.392759    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.392978    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:16.393250    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:16.393250    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.393250    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.393250    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.396027    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:16.396609    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Audit-Id: 10f68a14-0c69-4e8a-9085-ba2c583ac703
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.396609    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.396609    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.396661    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.396661    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.396747    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.887141    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:16.887232    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.887232    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.887232    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.890534    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:16.890534    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Audit-Id: 093d44ee-e0bc-492b-a329-4cb19fc26026
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.890534    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.890534    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.891436    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.891607    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:16.892298    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:16.892373    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:16.892373    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:16.892373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:16.894612    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:16.894612    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:16.894612    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:16.894612    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:16.895274    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:16.895274    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:16.895274    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:16 GMT
	I0731 23:56:16.895274    9020 round_trippers.go:580]     Audit-Id: 464810d3-b6eb-4abb-b8c2-c0dff0dcd027
	I0731 23:56:16.895901    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:16.896626    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:17.387767    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:17.388055    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.388055    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.388055    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.391012    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.391012    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Audit-Id: bb69bc17-0d02-44b9-acca-359f08a02fc3
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.391012    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.391012    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.391012    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.391877    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:17.393155    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:17.393222    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.393222    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.393222    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.395492    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.395492    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Audit-Id: 1def7d6e-2944-4db5-8634-58494d036cd8
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.395492    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.395492    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.395492    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.396223    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:17.888543    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:17.888833    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.888833    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.888833    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.893151    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:17.893151    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Audit-Id: 46e95556-186f-4868-aec8-b9c80967620e
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.893151    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.893151    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.893151    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.893622    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:17.894329    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:17.894329    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:17.894329    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:17.894329    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:17.896933    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:17.896933    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:17.896933    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:17.897575    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:17.897575    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:17 GMT
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Audit-Id: 7df99dd4-90d8-4872-aec7-2fcb578558de
	I0731 23:56:17.897575    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:17.897803    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.387607    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:18.387607    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.387607    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.387607    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.394804    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:56:18.394804    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.394804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.394804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Audit-Id: f092b29a-e6cf-4403-ab9c-568c41cf380f
	I0731 23:56:18.394804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.395562    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:18.396439    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:18.396439    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.396439    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.396439    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.398805    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:18.398805    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Audit-Id: 109d4364-a622-4578-972b-3ae52dfed42b
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.398805    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.398805    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.398805    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.398805    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.887062    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:18.887062    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.887062    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.887062    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.890724    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:18.891286    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.891286    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.891286    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Audit-Id: 30fe43a5-365d-486d-a2b1-59315de60a6f
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.891286    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.892492    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:18.893355    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:18.893355    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:18.893355    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:18.893355    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:18.896317    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:18.896435    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:18.896435    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:18.896435    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:18 GMT
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Audit-Id: e79c7cbd-4b85-413a-9d72-1663a173cdea
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:18.896435    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:18.896435    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:18.897363    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:19.386336    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:19.386336    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.386336    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.386336    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.389062    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:19.389062    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Audit-Id: 6e8851cd-feb0-4c3b-a3a7-604ef81e1397
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.389062    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.389062    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.389062    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.390420    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:19.391179    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:19.391268    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.391268    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.391392    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.396406    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:19.396406    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.396406    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Audit-Id: 04a5da4b-5f0f-443b-a710-c04dcdc0a60c
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.396406    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.396406    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.397201    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:19.887227    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:19.887227    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.887227    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.887373    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.890598    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:19.890598    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Audit-Id: 9d5e8324-334f-44d5-b655-70d8d527bcfa
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.890598    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.890598    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.890598    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.891031    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.891389    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:19.892130    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:19.892183    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:19.892183    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:19.892183    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:19.898097    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:19.898097    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Audit-Id: 0faf23c3-b7b6-4c77-b0b3-f3612378ac5e
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:19.898097    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:19.898097    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:19.898097    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:19 GMT
	I0731 23:56:19.898740    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.388688    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:20.388688    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.388752    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.388752    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.393730    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:20.393956    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.393956    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.393956    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.393956    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.393956    9020 round_trippers.go:580]     Audit-Id: e479125b-8a09-4b2e-b3a8-d94dc62e98e8
	I0731 23:56:20.394002    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.394091    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.394119    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:20.394968    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:20.394968    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.394968    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.394968    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.399256    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:20.399256    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.399477    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.399477    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.399477    9020 round_trippers.go:580]     Audit-Id: 0e9d1bff-749b-4cbf-ad89-1c2f062633ab
	I0731 23:56:20.399542    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.888613    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:20.888613    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.888613    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.888613    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.892081    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:20.892081    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.892081    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Audit-Id: 5bd2d2c0-8daa-46bb-9de2-ae98d0b7c172
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.892081    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.892081    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.892859    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:20.893681    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:20.893681    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:20.893681    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:20.893681    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:20.896998    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:20.896998    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:20 GMT
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Audit-Id: 10b0fe75-34de-4fdf-b8fa-e44a6090ee94
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:20.896998    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:20.897105    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:20.897105    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:20.897320    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:20.897714    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:21.387769    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:21.387769    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.388239    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.388239    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.391611    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:21.391611    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.391947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.391947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.391947    9020 round_trippers.go:580]     Audit-Id: 64ba2336-f6ff-47bd-86c2-c1afc6db4132
	I0731 23:56:21.392239    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:21.393005    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:21.393005    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.393005    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.393005    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.396585    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:21.396585    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.396925    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.396925    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Audit-Id: f7dbf91c-fc37-4375-a9d9-767c83f7e370
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.396925    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.396925    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:21.888388    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:21.888467    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.888467    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.888467    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.892747    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:21.893677    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.893677    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.893737    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.893737    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Audit-Id: 521ac87a-69bd-4053-a50a-19b0f547cdd3
	I0731 23:56:21.893737    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.893929    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:21.894802    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:21.894964    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:21.894964    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:21.894964    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:21.896781    9020 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 23:56:21.896781    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:21.896781    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:21.897634    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:21.897634    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:21 GMT
	I0731 23:56:21.897634    9020 round_trippers.go:580]     Audit-Id: 177885c9-396b-4efc-b5b2-645e49be7083
	I0731 23:56:21.898058    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:22.386386    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:22.386386    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.386386    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.386386    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.389998    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:22.390910    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Audit-Id: aef99beb-d04a-423f-a94d-5a5a9db20703
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.390910    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.390910    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.391016    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.391016    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.391253    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:22.392200    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:22.392256    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.392256    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.392256    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.395022    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:22.395679    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Audit-Id: 01cdb5bb-f8e8-4e92-9665-bc9d14aed9ec
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.395679    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.395971    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:22.883846    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:22.883911    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.883911    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.883911    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.887935    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:22.887935    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Audit-Id: 6f0c593d-51ed-4af5-af59-c8efc6596c3c
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.888500    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.888500    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.888500    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.888614    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:22.889329    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:22.889329    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:22.889329    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:22.889329    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:22.893001    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:22.893001    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:22.893001    9020 round_trippers.go:580]     Audit-Id: cdf80642-cf63-4cb8-987a-64b2918d1fdb
	I0731 23:56:22.893001    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:22.893819    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:22.893819    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:22.893819    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:22.893819    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:22 GMT
	I0731 23:56:22.894135    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:23.384893    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:23.384893    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.384893    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.384893    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.388570    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:23.388982    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.388982    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Audit-Id: cf0612c6-3569-4bd6-bc70-799d3ee49ace
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.388982    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.388982    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.388982    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:23.389967    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:23.389967    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.390076    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.390076    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.392993    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:23.394031    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.394031    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Audit-Id: 9b351eaf-4a10-450a-b634-ae42985d693f
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.394082    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.394082    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.394082    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:23.394814    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:23.884639    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:23.884639    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.884639    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.884639    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.887694    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:23.887694    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Audit-Id: 93902d49-f1a9-461b-98e5-5356e59b4e0a
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.887694    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.887694    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.887694    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.888466    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:23.889139    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:23.889139    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:23.889139    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:23.889139    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:23.891848    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:23.891848    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Audit-Id: 13c25a5d-0902-44aa-ab2e-d0a968892769
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:23.891848    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:23.891848    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:23.891848    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:23 GMT
	I0731 23:56:23.892598    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:24.384288    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:24.384383    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.384383    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.384383    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.387749    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:24.387749    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Audit-Id: ab1c334e-b8e9-4a2e-859c-3a2ad4d86194
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.387749    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.388282    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.388282    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.388282    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.388494    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:24.389002    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:24.389002    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.389002    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.389002    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.391568    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:24.391568    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.391568    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.391568    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Audit-Id: 5a4fec8e-d1f7-4c71-9124-1d025f1f12b7
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.391568    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.392453    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:24.882755    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:24.882755    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.882755    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.882755    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.885843    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:24.885843    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.885843    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Audit-Id: ec5a7e9b-3681-414e-ab18-7f987b8cff3c
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.886260    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.886260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.886260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.886455    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:24.887004    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:24.887213    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:24.887213    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:24.887213    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:24.889584    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:24.889584    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:24.889584    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:24 GMT
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Audit-Id: 6ee97c38-0151-4eb6-8950-d6ed4def04ba
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:24.889584    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:24.889584    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:24.890317    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.382836    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:25.382836    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.382836    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.382931    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.385510    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:25.385510    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.385510    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.385510    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Audit-Id: d63181f8-17e6-4ddf-9a96-846d9a786cbd
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.385510    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.386767    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:25.387637    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:25.387637    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.387743    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.387743    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.391732    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:25.391732    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Audit-Id: 4b4aba75-923a-4655-b330-901e69a44ebc
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.391732    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.391732    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.391732    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.392482    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.881384    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:25.881476    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.881476    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.881476    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.885747    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:56:25.885747    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.885747    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.885747    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.886143    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Audit-Id: 37e984d0-c6e3-40ae-83ec-20ca3a71dd73
	I0731 23:56:25.886143    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.886308    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:25.886957    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:25.886957    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:25.886957    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:25.886957    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:25.889589    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:25.889589    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:25.889589    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:25.889589    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:25 GMT
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Audit-Id: 021a7b58-1509-4010-bfaa-df2824bca110
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:25.889589    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:25.890651    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:25.890651    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:26.383836    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:26.384048    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.384048    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.384048    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.390904    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:26.390904    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Audit-Id: 75d02ceb-30ae-41b1-8d46-579a0df43af5
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.390904    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.390904    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.390904    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.391529    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:26.392340    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:26.392340    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.392340    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.392340    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.394899    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:26.394899    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.394899    9020 round_trippers.go:580]     Audit-Id: 26c9bdcc-5634-40df-8d83-295b3e2afa2c
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.395679    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.395679    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.396027    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:26.883719    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:26.883774    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.883824    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.883824    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.890203    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:26.890203    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Audit-Id: 4b90f8cd-5a24-4ae9-916c-50da0cfd1ec6
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.890203    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.890203    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.890203    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.896347    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:26.897092    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:26.897147    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:26.897147    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:26.897212    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:26.899919    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:26.900796    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Audit-Id: 3bf4b77c-2528-4a14-ae4c-3011248e54cb
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:26.900796    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:26.900796    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:26.900796    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:26 GMT
	I0731 23:56:26.901274    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.388511    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:27.388687    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.388745    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.388745    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.395020    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:27.395020    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Audit-Id: f8120cc1-d868-4e6b-b3ee-64bc4344d06d
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.395020    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.395020    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.395020    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.395020    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:27.395800    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:27.396380    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.396380    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.396480    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.399385    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:27.399385    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.399385    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Audit-Id: bbc7fe47-283c-454d-b35f-1bc2fbfc3c43
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.399385    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.399385    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.399987    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.894470    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:27.894470    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.894470    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.894470    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.901451    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:27.901451    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Audit-Id: 16a3de77-1669-416a-acd0-1581f8142e97
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.901451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.901451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.901451    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.901451    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:27.902471    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:27.902471    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:27.902471    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:27.902471    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:27.918451    9020 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 23:56:27.918451    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Audit-Id: 479c258a-cff9-490e-96a4-45b988dcba5a
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:27.918451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:27.918451    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:27.918451    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:27 GMT
	I0731 23:56:27.918451    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:27.919506    9020 pod_ready.go:102] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"False"
	I0731 23:56:28.380993    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:28.381188    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.381188    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.381188    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.384593    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:28.384593    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Audit-Id: 653e81d3-4b9b-4807-b572-c2e48bf48a95
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.384593    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.384593    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.384593    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.385765    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:28.386500    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:28.386500    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.386600    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.386600    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.389446    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.389827    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Audit-Id: 8d67c5f9-338b-49b5-b788-43263125a8cc
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.389827    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.389827    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.389827    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.390483    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:28.880670    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:28.880670    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.880670    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.880670    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.883325    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.883325    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.883325    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.883325    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Audit-Id: 9368ece9-1eb4-44b8-b69a-7d5b859df901
	I0731 23:56:28.883325    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.884769    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1781","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6839 chars]
	I0731 23:56:28.885425    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:28.885425    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:28.885425    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:28.885425    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:28.887977    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:28.887977    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:28.888396    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:28.888396    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:28 GMT
	I0731 23:56:28.888396    9020 round_trippers.go:580]     Audit-Id: bca32cbb-9ed7-4ee3-b4e0-b8c3a6f118fa
	I0731 23:56:28.888700    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.384352    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:56:29.384478    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.384478    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.384548    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.386841    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.387891    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.387891    9020 round_trippers.go:580]     Audit-Id: 089e5977-4235-4abf-a014-57e1ea51d78a
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.387923    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.387923    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.387923    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.388088    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0731 23:56:29.389420    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.389420    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.389420    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.389420    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.391804    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.391804    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.391804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.391804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Audit-Id: f9734abd-3fdd-48dd-80fc-2948a33e4bb5
	I0731 23:56:29.391804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.392898    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.393554    9020 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.393554    9020 pod_ready.go:81] duration metric: took 14.5132735s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
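The block above is the readiness poll for the coredns pod: the same pod object is fetched over and over until its Ready condition flips to True, and only then is the duration reported. A minimal, illustrative sketch of such a poll with client-go follows; it assumes a kubeconfig at the default location and reuses the pod name from the log purely as an example, and it is not minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the kubeconfig from its default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly twice per second for up to 6 minutes, the same waiting
	// budget the log reports ("waiting up to 6m0s").
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-z8gtw", metav1.GetOptions{})
			if err != nil {
				// Abort on any API error for simplicity; a real poller
				// might tolerate transient failures instead.
				return false, err
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}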
	I0731 23:56:29.393554    9020 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.393645    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:56:29.393717    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.393745    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.393745    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.396382    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.396382    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.396382    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.396630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.396630    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.396630    9020 round_trippers.go:580]     Audit-Id: eeb3ab7c-fc43-472d-8204-482d39151085
	I0731 23:56:29.396815    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1862","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0731 23:56:29.397450    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.397450    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.397450    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.397450    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.401037    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.401287    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.401287    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.401287    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.401287    9020 round_trippers.go:580]     Audit-Id: a6cfbdb4-869d-4aee-941e-7f5b27ec2b3f
	I0731 23:56:29.401459    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.401459    9020 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.401459    9020 pod_ready.go:81] duration metric: took 7.9045ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.402007    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.402185    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:56:29.402185    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.402185    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.402185    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.405090    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.405090    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.405090    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.405090    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.405090    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.405525    9020 round_trippers.go:580]     Audit-Id: 85742812-a408-4011-873d-933b216a699a
	I0731 23:56:29.405794    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1864","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0731 23:56:29.405794    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.405794    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.406339    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.406339    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.408569    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.408569    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.408569    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.408569    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Audit-Id: 466479ff-e93d-41d8-b4b8-6e479140ac45
	I0731 23:56:29.408569    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.408569    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.408569    9020 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.408569    9020 pod_ready.go:81] duration metric: took 6.5613ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.408569    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.408569    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:56:29.408569    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.408569    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.409566    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.412333    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.412333    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.412333    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Audit-Id: 7d22dab1-7c02-4590-8454-9d55e54df448
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.412333    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.412333    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.413220    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1891","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0731 23:56:29.413899    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.413899    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.413967    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.413967    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.417721    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.417772    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Audit-Id: 9a99f76a-d4eb-40c0-86d4-6d9c6a1fbdb8
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.417787    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.417787    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.417787    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.417888    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.418753    9020 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.418753    9020 pod_ready.go:81] duration metric: took 10.1843ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.418753    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.418927    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:56:29.418927    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.418927    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.419031    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.422058    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.422058    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Audit-Id: fcd95dbd-a00b-4ef2-92c9-3f98beb27867
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.422058    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.422058    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.422058    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.422920    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:56:29.422920    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:56:29.422920    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.422920    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.422920    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.425403    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.425403    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.425403    9020 round_trippers.go:580]     Audit-Id: 32637fb7-462a-474d-bfc2-8d1d42a0a168
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.425964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.425964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.425964    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.426030    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1888","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:56:29.426668    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:56:29.426668    9020 pod_ready.go:81] duration metric: took 7.9147ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:56:29.426668    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
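Here the pod check is skipped because the hosting node reports Ready as "Unknown", so waiting on the pod would be pointless. A minimal sketch of reading that condition from a Node object, assuming only the NodeReady condition matters (the helper name is hypothetical, not a minikube internal):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReadyStatus returns the status of the node's Ready condition:
// True, False, or Unknown (e.g. when the kubelet has stopped reporting).
func nodeReadyStatus(node *corev1.Node) corev1.ConditionStatus {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

func main() {
	// A node with no conditions reports Unknown, the same state the log
	// shows for multinode-411400-m03.
	fmt.Println(nodeReadyStatus(&corev1.Node{}))
}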
	I0731 23:56:29.426668    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.586997    9020 request.go:629] Waited for 160.1041ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:56:29.587193    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:56:29.587193    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.587193    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.587193    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.590269    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.590269    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Audit-Id: 02ff3148-d78d-40c2-9478-fa8a33f2ee59
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.590269    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.590269    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.590617    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.590617    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.590802    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:56:29.789851    9020 request.go:629] Waited for 198.3558ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.789851    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:29.789851    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.789851    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.789851    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.792663    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:56:29.793431    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Audit-Id: 3e29ee6c-2269-4f65-8bba-b17357f5e4d2
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.793431    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.793431    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.793431    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:29 GMT
	I0731 23:56:29.793741    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:29.794650    9020 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:29.794717    9020 pod_ready.go:81] duration metric: took 368.0448ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
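The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved above come from client-go's local token-bucket rate limiter, not from the API server. A minimal sketch of raising those client-side limits on a rest.Config; the QPS and Burst values below are arbitrary examples for illustration, not the settings minikube uses:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// QPS caps the sustained request rate; Burst allows short spikes above
	// it. Leaving both unset falls back to client-go's defaults, which is
	// what produces the throttling waits seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // subsequent API calls through cs are throttled at the new limits
}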
	I0731 23:56:29.794717    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:29.990445    9020 request.go:629] Waited for 195.6325ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:56:29.990574    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:56:29.990773    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:29.990854    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:29.990854    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:29.994168    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:29.994168    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Audit-Id: 36360ef5-72e8-4a2d-b9c9-53238bfa3c44
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:29.994168    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:29.994479    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:29.994479    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:29.994759    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"610","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0731 23:56:30.192779    9020 request.go:629] Waited for 197.1053ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:56:30.192889    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:56:30.192889    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.192889    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.192889    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.196430    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:30.197162    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.197162    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Audit-Id: 5138a96d-f031-40b1-8d91-24e4834636d6
	I0731 23:56:30.197162    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.197226    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.197226    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.197226    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36","resourceVersion":"1679","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_35_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3825 chars]
	I0731 23:56:30.197947    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:30.197947    9020 pod_ready.go:81] duration metric: took 403.2245ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.197947    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.395990    9020 request.go:629] Waited for 198.0404ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:56:30.396531    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:56:30.396531    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.396531    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.396531    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.402405    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:30.402629    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Audit-Id: 40a2f5d7-df03-4dd8-a127-9a76c1cc242e
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.402629    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.402629    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.402683    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.402683    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.402820    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1875","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0731 23:56:30.598146    9020 request.go:629] Waited for 194.2236ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:30.598146    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:56:30.598146    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.598146    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.598264    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.612592    9020 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 23:56:30.612592    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Audit-Id: edc593f6-368d-4520-bd68-daeb48e250ba
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.612592    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.612592    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.612592    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.613043    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.613464    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:56:30.614017    9020 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:56:30.614072    9020 pod_ready.go:81] duration metric: took 416.12ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:56:30.614072    9020 pod_ready.go:38] duration metric: took 15.7531301s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
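The wait loop above polls each system-critical pod until its Ready condition reports True. A minimal sketch of the same check using client-go, assuming a kubeconfig that already points at the cluster (the kubeconfig path and the isPodReady helper are illustrative, not minikube's own code):

// isPodReady reports whether a pod's Ready condition is True, mirroring the
// pod_ready checks logged above. Sketch only; not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-411400", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}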
	I0731 23:56:30.614187    9020 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:56:30.625019    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:56:30.650486    9020 command_runner.go:130] > 1911
	I0731 23:56:30.650486    9020 api_server.go:72] duration metric: took 31.6537148s to wait for apiserver process to appear ...
	I0731 23:56:30.650733    9020 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:56:30.650733    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:56:30.658677    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
	I0731 23:56:30.659725    9020 round_trippers.go:463] GET https://172.17.27.27:8443/version
	I0731 23:56:30.659773    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.659773    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.659807    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.660666    9020 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 23:56:30.660666    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.660666    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.661446    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Content-Length: 263
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Audit-Id: 3fab2983-e7f7-4244-9f68-bff2e1b9b479
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.661446    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.661500    9020 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 23:56:30.661500    9020 api_server.go:141] control plane version: v1.30.3
	I0731 23:56:30.661565    9020 api_server.go:131] duration metric: took 10.832ms to wait for apiserver health ...
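The healthz and version reads above are plain HTTPS GETs against /healthz and /version; /healthz is expected to return the literal body "ok". A minimal sketch of such a probe, assuming an *http.Client that already trusts the cluster's CA (building that transport is omitted here):

// probeHealthz GETs <server>/healthz and reports whether the body is "ok".
// Sketch only; http.DefaultClient would fail TLS verification against a
// self-signed cluster CA, so a properly configured client is assumed.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func probeHealthz(client *http.Client, server string) (bool, error) {
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := probeHealthz(http.DefaultClient, "https://172.17.27.27:8443")
	fmt.Println(ok, err)
}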
	I0731 23:56:30.661565    9020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:56:30.800066    9020 request.go:629] Waited for 138.0565ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:30.800159    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:30.800159    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.800159    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.800159    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.805401    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:56:30.805401    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Audit-Id: 3096a232-ffc7-4c7d-b65f-3acb0c901018
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.805401    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.805401    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.805401    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:30 GMT
	I0731 23:56:30.809057    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0731 23:56:30.813307    9020 system_pods.go:59] 12 kube-system pods found
	I0731 23:56:30.813396    9020 system_pods.go:61] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:56:30.813396    9020 system_pods.go:61] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:56:30.813491    9020 system_pods.go:61] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:56:30.813535    9020 system_pods.go:61] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:56:30.813568    9020 system_pods.go:74] duration metric: took 152.0008ms to wait for pod list to return data ...
	I0731 23:56:30.813568    9020 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:56:30.986667    9020 request.go:629] Waited for 172.7652ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:56:30.986750    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/default/serviceaccounts
	I0731 23:56:30.986909    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:30.986909    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:30.986909    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:30.990685    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:30.990810    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Content-Length: 262
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Audit-Id: d8540bf8-7b0d-4002-8a5e-e23b3f9bc435
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:30.990810    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:30.990810    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:30.990810    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:30.990908    9020 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"16d02427-a81b-4fff-a90d-597cdeb70239","resourceVersion":"315","creationTimestamp":"2024-07-31T23:32:40Z"}}]}
	I0731 23:56:30.990973    9020 default_sa.go:45] found service account: "default"
	I0731 23:56:30.990973    9020 default_sa.go:55] duration metric: took 177.4028ms for default service account to be created ...
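The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, configured through the QPS and Burst fields on rest.Config. A short sketch of raising those limits (the kubeconfig path and the values 50/100 are arbitrary examples, not what minikube uses):

// Tune client-go's client-side rate limiter; defaults are QPS=5, Burst=10.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // requests per second before throttling kicks in
	cfg.Burst = 100 // short burst allowance above QPS
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}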
	I0731 23:56:30.990973    9020 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:56:31.191030    9020 request.go:629] Waited for 199.763ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:31.191030    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:56:31.191030    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:31.191030    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:31.191149    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:31.197931    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:56:31.197931    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Audit-Id: 01c5fa32-b66a-4bcc-81e7-3a631d8922a5
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:31.197931    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:31.197931    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:31.197931    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:31.200783    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0731 23:56:31.205819    9020 system_pods.go:86] 12 kube-system pods found
	I0731 23:56:31.205819    9020 system_pods.go:89] "coredns-7db6d8ff4d-z8gtw" [41ddb3a7-8405-49e7-88fb-41ab6278e4af] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "etcd-multinode-411400" [4de1ad7a-3a8e-4823-9430-fadd76753763] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-bgnqq" [7bb015d3-5a3f-4be8-861c-b29fb76da15c] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-cxs2b" [04d92937-d48a-4a21-b4ce-adb78d3cad7f] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kindnet-j8slc" [d77d4517-d9d3-46d9-a231-1496684afe1d] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-apiserver-multinode-411400" [eaabee4a-7fb0-455f-b354-3fae71ca2878] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-controller-manager-multinode-411400" [217a4087-49b2-4b74-a094-e027a51cf503] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-5j8pv" [761c8479-d25f-4142-93b6-23b0d1e3ccb7] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-chdxg" [f3405391-f4cb-4ffe-8d51-d669e37d0a3b] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-proxy-g7tpl" [c8356e2e-b324-4001-9b82-18a13b436517] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "kube-scheduler-multinode-411400" [a10cf66c-3049-48d4-9ab1-8667efc59977] Running
	I0731 23:56:31.205819    9020 system_pods.go:89] "storage-provisioner" [f33ea8e6-6b88-471e-a471-d3c4faf9de93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:56:31.205819    9020 system_pods.go:126] duration metric: took 214.8437ms to wait for k8s-apps to be running ...
	I0731 23:56:31.205819    9020 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:56:31.215985    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:56:31.241087    9020 system_svc.go:56] duration metric: took 35.2672ms WaitForService to wait for kubelet
	I0731 23:56:31.241759    9020 kubeadm.go:582] duration metric: took 32.24498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:56:31.241759    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:56:31.394062    9020 request.go:629] Waited for 152.1922ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes
	I0731 23:56:31.394241    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:56:31.394241    9020 round_trippers.go:469] Request Headers:
	I0731 23:56:31.394241    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:56:31.394241    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:56:31.398061    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:56:31.398061    9020 round_trippers.go:577] Response Headers:
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:56:31 GMT
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Audit-Id: 6beca473-58a1-411f-b3b4-5911b8ce6cb2
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:56:31.398061    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:56:31.398061    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:56:31.398061    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:56:31.399068    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15498 chars]
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:56:31.399854    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:56:31.399854    9020 node_conditions.go:105] duration metric: took 158.0935ms to run NodePressure ...
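The node_conditions figures above (ephemeral storage 17734596Ki and 2 CPUs per node) are read from the node objects returned by GET /api/v1/nodes. A sketch of pulling the same figures, reusing the clientset and imports from the readiness sketch earlier (the log does not show whether Capacity or Allocatable is used, so Capacity here is an assumption):

// printNodeCapacity lists nodes and prints their CPU and ephemeral-storage
// capacity, mirroring the node_conditions lines above. cs is assumed to be
// the *kubernetes.Clientset built in the earlier readiness sketch.
func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}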
	I0731 23:56:31.399854    9020 start.go:241] waiting for startup goroutines ...
	I0731 23:56:31.399854    9020 start.go:246] waiting for cluster config update ...
	I0731 23:56:31.399854    9020 start.go:255] writing updated cluster config ...
	I0731 23:56:31.405016    9020 out.go:177] 
	I0731 23:56:31.408456    9020 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:56:31.417457    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:56:31.417817    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:56:31.424146    9020 out.go:177] * Starting "multinode-411400-m02" worker node in "multinode-411400" cluster
	I0731 23:56:31.426557    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:56:31.426557    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:56:31.427490    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:56:31.427490    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:56:31.428157    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:56:31.430754    9020 start.go:360] acquireMachinesLock for multinode-411400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:56:31.430920    9020 start.go:364] duration metric: took 77.8µs to acquireMachinesLock for "multinode-411400-m02"
	I0731 23:56:31.431066    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:56:31.431066    9020 fix.go:54] fixHost starting: m02
	I0731 23:56:31.431918    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:33.459032    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:56:33.459737    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:33.459737    9020 fix.go:112] recreateIfNeeded on multinode-411400-m02: state=Stopped err=<nil>
	W0731 23:56:33.459737    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:56:33.466096    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400-m02" ...
	I0731 23:56:33.468742    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400-m02
	I0731 23:56:36.455289    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:36.455289    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:36.455289    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:56:36.455538    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:38.679348    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:38.680031    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:38.680031    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:41.113511    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:41.113980    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:42.121759    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:44.317525    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:44.317525    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:44.317757    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:46.787395    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:46.787395    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:47.791046    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:49.968009    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:49.969035    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:49.969035    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:52.514205    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:52.514205    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:53.526893    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:56:55.692596    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:56:55.692596    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:55.692670    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:56:58.197663    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:56:58.197763    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:56:59.213925    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:01.450438    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:01.450438    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:01.450885    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:03.993253    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:03.993253    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:03.996232    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:06.166158    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:06.166158    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:06.166313    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:08.620940    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:08.622061    9020 main.go:141] libmachine: [stderr =====>] : 
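The "Waiting for host to start..." loop above keeps re-running two PowerShell one-liners, first for the VM state and then for the first IP address of the VM's first network adapter, until the address (172.17.23.93 here) finally appears. A sketch of issuing that IP query from Go via os/exec (invoking powershell.exe from PATH rather than the full path shown in the log):

// vmIP asks Hyper-V for the first IP address of a VM's first network adapter,
// the same query the log repeats while waiting for the guest to boot.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmIP(name string) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIP("multinode-411400-m02")
	fmt.Println(ip, err) // prints an empty string until the guest has an address
}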
	I0731 23:57:08.622252    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:57:08.625065    9020 machine.go:94] provisionDockerMachine start ...
	I0731 23:57:08.625065    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:10.780775    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:10.781016    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:10.781110    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:13.290807    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:13.290807    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:13.296391    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:13.297265    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:13.297265    9020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:57:13.430640    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:57:13.430640    9020 buildroot.go:166] provisioning hostname "multinode-411400-m02"
	I0731 23:57:13.430746    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:15.582351    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:15.582889    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:15.582889    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:18.111516    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:18.111516    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:18.118209    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:18.119099    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:18.119099    9020 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-411400-m02 && echo "multinode-411400-m02" | sudo tee /etc/hostname
	I0731 23:57:18.273593    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-411400-m02
	
	I0731 23:57:18.273753    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:20.411341    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:22.938968    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:22.938968    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:22.946189    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:22.946890    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:22.946890    9020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-411400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-411400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-411400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:57:23.091608    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:57:23.091608    9020 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0731 23:57:23.092148    9020 buildroot.go:174] setting up certificates
	I0731 23:57:23.092188    9020 provision.go:84] configureAuth start
	I0731 23:57:23.092188    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:25.189438    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:25.189438    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:25.189646    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:27.671106    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:27.671106    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:27.672135    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:29.762607    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:29.763680    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:29.763680    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:32.228880    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:32.228880    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:32.228880    9020 provision.go:143] copyHostCerts
	I0731 23:57:32.229955    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0731 23:57:32.230419    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0731 23:57:32.230419    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0731 23:57:32.230419    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0731 23:57:32.232410    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0731 23:57:32.232573    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0731 23:57:32.232573    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0731 23:57:32.233121    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0731 23:57:32.234017    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0731 23:57:32.234017    9020 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0731 23:57:32.234017    9020 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0731 23:57:32.234836    9020 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0731 23:57:32.235953    9020 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-411400-m02 san=[127.0.0.1 172.17.23.93 localhost minikube multinode-411400-m02]
	I0731 23:57:32.347842    9020 provision.go:177] copyRemoteCerts
	I0731 23:57:32.360165    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:57:32.360165    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:34.464608    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:34.464608    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:34.465609    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:36.937335    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:36.937428    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:36.937922    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:57:37.038513    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6782279s)
	I0731 23:57:37.038513    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0731 23:57:37.038820    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 23:57:37.088415    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0731 23:57:37.088415    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0731 23:57:37.131429    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0731 23:57:37.131429    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:57:37.176878    9020 provision.go:87] duration metric: took 14.0845097s to configureAuth
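configureAuth above regenerates the machine's server certificate with the SAN list from the log (127.0.0.1, 172.17.23.93, localhost, minikube, multinode-411400-m02) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A sketch of checking which SANs a generated server.pem actually carries (run on the guest; the path is taken from the log):

// Print the DNS and IP SANs of the provisioned Docker server certificate.
// Sketch only; assumes it runs where /etc/docker/server.pem exists.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/etc/docker/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}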
	I0731 23:57:37.176878    9020 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:57:37.177829    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:57:37.178102    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:39.240662    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:39.240662    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:39.240779    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:41.702584    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:41.702584    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:41.709210    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:41.709422    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:41.709422    9020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 23:57:41.833308    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 23:57:41.833308    9020 buildroot.go:70] root file system type: tmpfs
	I0731 23:57:41.833308    9020 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 23:57:41.833308    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:43.894372    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:43.895432    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:43.895432    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:46.351837    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:46.351837    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:46.357803    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:46.358289    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:46.358508    9020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.27.27"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 23:57:46.508005    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.27.27
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 23:57:46.508005    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:48.572007    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:48.572343    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:48.572343    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:51.050585    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:51.050585    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:51.057157    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:57:51.057358    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:57:51.057358    9020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 23:57:53.410347    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 23:57:53.410347    9020 machine.go:97] duration metric: took 44.7847085s to provisionDockerMachine
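The unit file above is written to /lib/systemd/system/docker.service.new and only swapped into place, followed by daemon-reload, enable and restart, when it differs from the existing unit; that is what the diff || { mv; systemctl ... } command shows. A sketch of rendering such a unit from a template with the NO_PROXY and insecure-registry values seen in the log (the struct fields and the trimmed-down template are illustrative, not minikube's actual template):

// Render a minimal docker.service fragment from a template, in the spirit of
// the full unit generated above. Fields and template are illustrative only.
package main

import (
	"os"
	"text/template"
)

type dockerUnit struct {
	NoProxy          string
	InsecureRegistry string
}

const unitTmpl = `[Service]
Type=notify
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --insecure-registry {{.InsecureRegistry}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	unit := dockerUnit{NoProxy: "172.17.27.27", InsecureRegistry: "10.96.0.0/12"}
	if err := t.Execute(os.Stdout, unit); err != nil {
		panic(err)
	}
}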
	I0731 23:57:53.410347    9020 start.go:293] postStartSetup for "multinode-411400-m02" (driver="hyperv")
	I0731 23:57:53.410347    9020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:57:53.423939    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:57:53.423939    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:55.517724    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:57:57.993554    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:57:57.993554    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:57:57.993554    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:57:58.109764    9020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6856449s)
	I0731 23:57:58.122213    9020 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:57:58.128602    9020 command_runner.go:130] > NAME=Buildroot
	I0731 23:57:58.128843    9020 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:57:58.128843    9020 command_runner.go:130] > ID=buildroot
	I0731 23:57:58.128843    9020 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:57:58.128843    9020 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:57:58.128950    9020 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:57:58.128950    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0731 23:57:58.129443    9020 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0731 23:57:58.130141    9020 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> 123322.pem in /etc/ssl/certs
	I0731 23:57:58.130141    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /etc/ssl/certs/123322.pem
	I0731 23:57:58.142020    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:57:58.161164    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /etc/ssl/certs/123322.pem (1708 bytes)
	I0731 23:57:58.202992    9020 start.go:296] duration metric: took 4.792583s for postStartSetup
	I0731 23:57:58.203053    9020 fix.go:56] duration metric: took 1m26.7708778s for fixHost
	I0731 23:57:58.203053    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:00.249617    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:00.249617    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:00.250226    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:02.713803    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:02.713803    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:02.719658    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:58:02.720912    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:58:02.720912    9020 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:58:02.851877    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722470282.869163899
	
	I0731 23:58:02.851877    9020 fix.go:216] guest clock: 1722470282.869163899
	I0731 23:58:02.851974    9020 fix.go:229] Guest: 2024-07-31 23:58:02.869163899 +0000 UTC Remote: 2024-07-31 23:57:58.2030531 +0000 UTC m=+250.164330401 (delta=4.666110799s)
	I0731 23:58:02.852107    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:04.955430    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:04.955430    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:04.955594    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:07.419751    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:07.420768    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:07.425610    9020 main.go:141] libmachine: Using SSH client type: native
	I0731 23:58:07.426138    9020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11eaa40] 0x11ed620 <nil>  [] 0s} 172.17.23.93 22 <nil> <nil>}
	I0731 23:58:07.426280    9020 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1722470282
	I0731 23:58:07.570849    9020 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jul 31 23:58:02 UTC 2024
	
	I0731 23:58:07.570849    9020 fix.go:236] clock set: Wed Jul 31 23:58:02 UTC 2024
	 (err=<nil>)
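The fix above compares the guest clock (read over SSH with date) against the local clock and, once the delta is large enough (4.67s here), writes a Unix timestamp back into the guest with sudo date -s @<seconds>. A sketch of measuring the drift and formatting that command (the 2-second threshold and the choice of reference timestamp are illustrative; the log does not spell out minikube's exact policy):

// clockFixCommand returns a "sudo date -s @<seconds>" command when the guest
// and reference clocks drift more than a threshold. Threshold is illustrative.
package main

import (
	"fmt"
	"time"
)

func clockFixCommand(guest, ref time.Time) (string, bool) {
	if d := guest.Sub(ref); d > 2*time.Second || d < -2*time.Second {
		return fmt.Sprintf("sudo date -s @%d", ref.Unix()), true
	}
	return "", false
}

func main() {
	guest := time.Now().Add(4 * time.Second) // pretend the guest runs ~4s ahead, like the logged delta
	fmt.Println(clockFixCommand(guest, time.Now()))
}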
	I0731 23:58:07.570849    9020 start.go:83] releasing machines lock for "multinode-411400-m02", held for 1m36.1386996s
	I0731 23:58:07.571860    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:09.681890    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:09.681890    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:09.682751    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:12.140483    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:12.140483    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:12.144088    9020 out.go:177] * Found network options:
	I0731 23:58:12.159004    9020 out.go:177]   - NO_PROXY=172.17.27.27
	W0731 23:58:12.161911    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:58:12.164160    9020 out.go:177]   - NO_PROXY=172.17.27.27
	W0731 23:58:12.167340    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 23:58:12.168575    9020 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 23:58:12.170205    9020 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0731 23:58:12.170205    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:12.180883    9020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:58:12.180883    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:14.328189    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:14.328189    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:14.328356    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:14.338915    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:16.865874    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:16.865874    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:16.866760    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:16.895386    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:16.895598    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:16.896344    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:16.950662    9020 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0731 23:58:16.951331    9020 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.7810639s)
	W0731 23:58:16.951331    9020 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0731 23:58:16.986443    9020 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0731 23:58:16.986545    9020 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8056006s)
	W0731 23:58:16.986545    9020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:58:17.000960    9020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:58:17.028958    9020 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0731 23:58:17.029735    9020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:58:17.029779    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:58:17.029992    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0731 23:58:17.061696    9020 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0731 23:58:17.061696    9020 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0731 23:58:17.067436    9020 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0731 23:58:17.080066    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 23:58:17.109687    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 23:58:17.130310    9020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 23:58:17.141350    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 23:58:17.171079    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:58:17.205305    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 23:58:17.238180    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 23:58:17.268589    9020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:58:17.299895    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 23:58:17.331112    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 23:58:17.363965    9020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 23:58:17.398504    9020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:58:17.414579    9020 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:58:17.426331    9020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:58:17.456537    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:17.646236    9020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 23:58:17.677662    9020 start.go:495] detecting cgroup driver to use...
	I0731 23:58:17.692941    9020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 23:58:17.712765    9020 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0731 23:58:17.712765    9020 command_runner.go:130] > [Unit]
	I0731 23:58:17.712765    9020 command_runner.go:130] > Description=Docker Application Container Engine
	I0731 23:58:17.712765    9020 command_runner.go:130] > Documentation=https://docs.docker.com
	I0731 23:58:17.712765    9020 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0731 23:58:17.712765    9020 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0731 23:58:17.712765    9020 command_runner.go:130] > StartLimitBurst=3
	I0731 23:58:17.712765    9020 command_runner.go:130] > StartLimitIntervalSec=60
	I0731 23:58:17.713027    9020 command_runner.go:130] > [Service]
	I0731 23:58:17.713027    9020 command_runner.go:130] > Type=notify
	I0731 23:58:17.713027    9020 command_runner.go:130] > Restart=on-failure
	I0731 23:58:17.713027    9020 command_runner.go:130] > Environment=NO_PROXY=172.17.27.27
	I0731 23:58:17.713027    9020 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0731 23:58:17.713107    9020 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0731 23:58:17.713107    9020 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0731 23:58:17.713107    9020 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0731 23:58:17.713107    9020 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0731 23:58:17.713107    9020 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0731 23:58:17.713107    9020 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0731 23:58:17.713222    9020 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0731 23:58:17.713222    9020 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecStart=
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0731 23:58:17.713222    9020 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0731 23:58:17.713222    9020 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitNOFILE=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitNPROC=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > LimitCORE=infinity
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0731 23:58:17.713222    9020 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0731 23:58:17.713388    9020 command_runner.go:130] > TasksMax=infinity
	I0731 23:58:17.713388    9020 command_runner.go:130] > TimeoutStartSec=0
	I0731 23:58:17.713388    9020 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0731 23:58:17.713388    9020 command_runner.go:130] > Delegate=yes
	I0731 23:58:17.713388    9020 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0731 23:58:17.713388    9020 command_runner.go:130] > KillMode=process
	I0731 23:58:17.713388    9020 command_runner.go:130] > [Install]
	I0731 23:58:17.713388    9020 command_runner.go:130] > WantedBy=multi-user.target
	I0731 23:58:17.724869    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:58:17.757721    9020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:58:17.802141    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:58:17.835425    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:58:17.873719    9020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 23:58:17.938462    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 23:58:17.962106    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:58:17.993778    9020 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0731 23:58:18.006466    9020 ssh_runner.go:195] Run: which cri-dockerd
	I0731 23:58:18.011447    9020 command_runner.go:130] > /usr/bin/cri-dockerd
	I0731 23:58:18.023583    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 23:58:18.044901    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 23:58:18.086509    9020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 23:58:18.276529    9020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 23:58:18.480819    9020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 23:58:18.480862    9020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 23:58:18.529857    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:18.710093    9020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 23:58:21.363247    9020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6530532s)
	I0731 23:58:21.374651    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 23:58:21.406334    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:58:21.438100    9020 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 23:58:21.640588    9020 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 23:58:21.833094    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:22.025929    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 23:58:22.073671    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 23:58:22.106667    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:22.304764    9020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 23:58:22.407388    9020 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 23:58:22.418912    9020 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 23:58:22.427532    9020 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0731 23:58:22.427532    9020 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:58:22.427626    9020 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0731 23:58:22.427626    9020 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0731 23:58:22.427626    9020 command_runner.go:130] > Access: 2024-07-31 23:58:22.352111940 +0000
	I0731 23:58:22.427626    9020 command_runner.go:130] > Modify: 2024-07-31 23:58:22.352111940 +0000
	I0731 23:58:22.427714    9020 command_runner.go:130] > Change: 2024-07-31 23:58:22.356112007 +0000
	I0731 23:58:22.427714    9020 command_runner.go:130] >  Birth: -
	I0731 23:58:22.427714    9020 start.go:563] Will wait 60s for crictl version
	I0731 23:58:22.439004    9020 ssh_runner.go:195] Run: which crictl
	I0731 23:58:22.449994    9020 command_runner.go:130] > /usr/bin/crictl
	I0731 23:58:22.462804    9020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:58:22.512438    9020 command_runner.go:130] > Version:  0.1.0
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeName:  docker
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0731 23:58:22.512673    9020 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:58:22.512865    9020 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0731 23:58:22.522517    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:58:22.559682    9020 command_runner.go:130] > 27.1.1
	I0731 23:58:22.567961    9020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 23:58:22.603762    9020 command_runner.go:130] > 27.1.1
	I0731 23:58:22.608104    9020 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0731 23:58:22.610970    9020 out.go:177]   - env NO_PROXY=172.17.27.27
	I0731 23:58:22.613514    9020 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0731 23:58:22.616974    9020 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5e:d5:76 Flags:up|broadcast|multicast|running}
	I0731 23:58:22.619552    9020 ip.go:210] interface addr: fe80::9de4:671f:bc4a:75b1/64
	I0731 23:58:22.619552    9020 ip.go:210] interface addr: 172.17.16.1/20
	I0731 23:58:22.629115    9020 ssh_runner.go:195] Run: grep 172.17.16.1	host.minikube.internal$ /etc/hosts
	I0731 23:58:22.635550    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:58:22.654821    9020 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:58:22.655049    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:22.655887    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:24.707903    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:24.708802    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:24.708802    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:24.709636    9020 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400 for IP: 172.17.23.93
	I0731 23:58:24.709712    9020 certs.go:194] generating shared ca certs ...
	I0731 23:58:24.709712    9020 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:58:24.710365    9020 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0731 23:58:24.710591    9020 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0731 23:58:24.710591    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:58:24.711295    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:58:24.711428    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:58:24.711661    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:58:24.711661    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem (1338 bytes)
	W0731 23:58:24.712416    9020 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332_empty.pem, impossibly tiny 0 bytes
	I0731 23:58:24.712573    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0731 23:58:24.712915    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0731 23:58:24.713027    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0731 23:58:24.713027    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0731 23:58:24.714200    9020 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem (1708 bytes)
	I0731 23:58:24.714488    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem -> /usr/share/ca-certificates/123322.pem
	I0731 23:58:24.714607    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:24.714852    9020 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem -> /usr/share/ca-certificates/12332.pem
	I0731 23:58:24.715203    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:58:24.767528    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:58:24.816225    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:58:24.857310    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 23:58:24.902498    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\123322.pem --> /usr/share/ca-certificates/123322.pem (1708 bytes)
	I0731 23:58:24.945561    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:58:24.990258    9020 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\12332.pem --> /usr/share/ca-certificates/12332.pem (1338 bytes)
	I0731 23:58:25.044070    9020 ssh_runner.go:195] Run: openssl version
	I0731 23:58:25.052661    9020 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:58:25.064021    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123322.pem && ln -fs /usr/share/ca-certificates/123322.pem /etc/ssl/certs/123322.pem"
	I0731 23:58:25.093249    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.100542    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.100542    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:49 /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.112153    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123322.pem
	I0731 23:58:25.121004    9020 command_runner.go:130] > 3ec20f2e
	I0731 23:58:25.131199    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123322.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:58:25.160253    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:58:25.191392    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.198387    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.198764    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.209806    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:58:25.218749    9020 command_runner.go:130] > b5213941
	I0731 23:58:25.230640    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:58:25.257897    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12332.pem && ln -fs /usr/share/ca-certificates/12332.pem /etc/ssl/certs/12332.pem"
	I0731 23:58:25.286898    9020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.294261    9020 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.294261    9020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:49 /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.307194    9020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12332.pem
	I0731 23:58:25.316519    9020 command_runner.go:130] > 51391683
	I0731 23:58:25.327521    9020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12332.pem /etc/ssl/certs/51391683.0"
	I0731 23:58:25.360208    9020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:58:25.366143    9020 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:58:25.366659    9020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:58:25.366659    9020 kubeadm.go:934] updating node {m02 172.17.23.93 8443 v1.30.3 docker false true} ...
	I0731 23:58:25.367188    9020 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-411400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.23.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:58:25.377903    9020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubeadm
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubectl
	I0731 23:58:25.395824    9020 command_runner.go:130] > kubelet
	I0731 23:58:25.395824    9020 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:58:25.406854    9020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0731 23:58:25.421818    9020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0731 23:58:25.449816    9020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:58:25.491058    9020 ssh_runner.go:195] Run: grep 172.17.27.27	control-plane.minikube.internal$ /etc/hosts
	I0731 23:58:25.497367    9020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.27.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:58:25.528063    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:25.708148    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:58:25.740286    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:25.740431    9020 start.go:317] joinCluster: &{Name:multinode-411400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-411400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.27.27 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.16.77 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:58:25.741198    9020 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:25.741198    9020 host.go:66] Checking if "multinode-411400-m02" exists ...
	I0731 23:58:25.742019    9020 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:58:25.742568    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:25.743132    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:27.860780    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:27.860780    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:27.860780    9020 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:58:27.861827    9020 api_server.go:166] Checking apiserver status ...
	I0731 23:58:27.873163    9020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:58:27.873163    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:29.998902    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:29.998992    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:29.999203    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:32.479095    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:58:32.479228    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:32.479684    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:58:32.579284    9020 command_runner.go:130] > 1911
	I0731 23:58:32.579595    9020 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.7063716s)
	I0731 23:58:32.590520    9020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup
	W0731 23:58:32.607768    9020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:58:32.618398    9020 ssh_runner.go:195] Run: ls
	I0731 23:58:32.626175    9020 api_server.go:253] Checking apiserver healthz at https://172.17.27.27:8443/healthz ...
	I0731 23:58:32.633426    9020 api_server.go:279] https://172.17.27.27:8443/healthz returned 200:
	ok
	I0731 23:58:32.647962    9020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-411400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0731 23:58:32.808520    9020 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bgnqq, kube-system/kube-proxy-g7tpl
	I0731 23:58:35.845254    9020 command_runner.go:130] > node/multinode-411400-m02 cordoned
	I0731 23:58:35.846056    9020 command_runner.go:130] > pod "busybox-fc5497c4f-lxslb" has DeletionTimestamp older than 1 seconds, skipping
	I0731 23:58:35.846056    9020 command_runner.go:130] > node/multinode-411400-m02 drained
	I0731 23:58:35.846177    9020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-411400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.198053s)
	I0731 23:58:35.846277    9020 node.go:128] successfully drained node "multinode-411400-m02"
	I0731 23:58:35.846350    9020 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0731 23:58:35.846477    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:58:37.951299    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:37.951299    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:37.952315    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:40.384012    9020 main.go:141] libmachine: [stdout =====>] : 172.17.23.93
	
	I0731 23:58:40.384012    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:40.384766    9020 sshutil.go:53] new ssh client: &{IP:172.17.23.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:58:40.832967    9020 command_runner.go:130] ! W0731 23:58:40.856078    1635 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0731 23:58:41.345135    9020 command_runner.go:130] ! W0731 23:58:41.367177    1635 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8: output: E0731 23:58:41.075523    1673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-lxslb_default\" network: cni config uninitialized" podSandboxID="1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8"
	I0731 23:58:41.345259    9020 command_runner.go:130] ! time="2024-07-31T23:58:41Z" level=fatal msg="stopping the pod sandbox \"1653d476284eb708686ef6a5a5a7142570e3a21b7242b46d0639cab2724982d8\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-lxslb_default\" network: cni config uninitialized"
	I0731 23:58:41.345259    9020 command_runner.go:130] ! : exit status 1
	I0731 23:58:41.374021    9020 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:58:41.374021    9020 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Stopping the kubelet service
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0731 23:58:41.374122    9020 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0731 23:58:41.374200    9020 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0731 23:58:41.374200    9020 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0731 23:58:41.374200    9020 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0731 23:58:41.374200    9020 command_runner.go:130] > to reset your system's IPVS tables.
	I0731 23:58:41.374200    9020 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0731 23:58:41.374200    9020 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0731 23:58:41.374200    9020 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.5277798s)
	I0731 23:58:41.374200    9020 node.go:155] successfully reset node "multinode-411400-m02"
	I0731 23:58:41.375134    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:58:41.376152    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:58:41.377952    9020 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 23:58:41.378287    9020 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0731 23:58:41.378477    9020 round_trippers.go:463] DELETE https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:41.378525    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:41.378544    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:41.378544    9020 round_trippers.go:473]     Content-Type: application/json
	I0731 23:58:41.378544    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:41.394932    9020 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 23:58:41.394932    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:41.394932    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:41.394932    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Content-Length: 171
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:41 GMT
	I0731 23:58:41.394932    9020 round_trippers.go:580]     Audit-Id: 5ff25fd2-5cf5-4721-8f46-4285b5eb7aec
	I0731 23:58:41.395408    9020 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-411400-m02","kind":"nodes","uid":"a1fd4523-9584-4ce0-a38a-2c510afc3a36"}}
	I0731 23:58:41.395445    9020 node.go:180] successfully deleted node "multinode-411400-m02"
	I0731 23:58:41.395445    9020 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:41.395445    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 23:58:41.395551    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:58:43.439282    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:58:43.439472    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:43.439596    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:58:45.904957    9020 main.go:141] libmachine: [stdout =====>] : 172.17.27.27
	
	I0731 23:58:45.904957    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:58:45.905920    9020 sshutil.go:53] new ssh client: &{IP:172.17.27.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:58:46.078857    9020 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf 
	I0731 23:58:46.079025    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6833519s)
	I0731 23:58:46.079051    9020 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:46.079129    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02"
	I0731 23:58:46.299769    9020 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:58:47.641371    9020 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 23:58:47.641535    9020 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0731 23:58:47.641573    9020 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0731 23:58:47.641573    9020 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:58:47.641573    9020 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:58:47.641658    9020 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 23:58:47.641658    9020 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 23:58:47.641705    9020 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.004055243s
	I0731 23:58:47.641705    9020 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0731 23:58:47.641705    9020 command_runner.go:130] > This node has joined the cluster:
	I0731 23:58:47.641705    9020 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0731 23:58:47.641705    9020 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0731 23:58:47.641705    9020 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0731 23:58:47.641808    9020 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fpqpq2.n6fap37a7e7fcvye --discovery-token-ca-cert-hash sha256:bd96266b96221067a8269bf37d675397734e40c2bb0955902c4a0085b11a1daf --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-411400-m02": (1.5626592s)
	I0731 23:58:47.641808    9020 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 23:58:47.856650    9020 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0731 23:58:48.049389    9020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-411400-m02 minikube.k8s.io/updated_at=2024_07_31T23_58_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=multinode-411400 minikube.k8s.io/primary=false
	I0731 23:58:48.161969    9020 command_runner.go:130] > node/multinode-411400-m02 labeled
	I0731 23:58:48.162026    9020 start.go:319] duration metric: took 22.4213074s to joinCluster
	I0731 23:58:48.162260    9020 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.23.93 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0731 23:58:48.162948    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:58:48.165160    9020 out.go:177] * Verifying Kubernetes components...
	I0731 23:58:48.180661    9020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:58:48.391789    9020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:58:48.416800    9020 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 23:58:48.416800    9020 kapi.go:59] client config for multinode-411400: &rest.Config{Host:"https://172.17.27.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-411400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2696f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:58:48.417807    9020 node_ready.go:35] waiting up to 6m0s for node "multinode-411400-m02" to be "Ready" ...
	I0731 23:58:48.417807    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:48.417807    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:48.417807    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:48.417807    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:48.422134    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:48.422134    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:48 GMT
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Audit-Id: 97b208d4-c7b7-47ff-86a1-3469207dd103
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:48.422134    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:48.422134    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:48.422134    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:48.422392    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:48.924542    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:48.924619    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:48.924619    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:48.924619    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:48.927375    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:48.928196    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:48.928196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:48.928196    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:48 GMT
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Audit-Id: d44b82a7-8479-44ba-af77-211fdbbd4026
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:48.928196    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:48.928455    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:49.423488    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:49.423488    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:49.423488    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:49.423488    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:49.428267    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:49.428267    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:49.428267    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:49.428267    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:49 GMT
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Audit-Id: cef3f9d6-7fda-4634-8599-9dc663ce3cd5
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:49.428267    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:49.430063    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:49.923430    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:49.923785    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:49.923785    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:49.923785    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:49.927252    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:49.927828    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:49.927828    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:49.927828    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:49 GMT
	I0731 23:58:49.927828    9020 round_trippers.go:580]     Audit-Id: e2e1cb25-b798-4026-9fb7-3e9f99e822e4
	I0731 23:58:49.928058    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:50.429609    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:50.429872    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:50.429872    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:50.429872    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:50.433192    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:50.433192    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Audit-Id: 8a8ed614-6180-46c4-89de-5d9f960e5507
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:50.433192    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:50.433192    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:50.433192    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:50 GMT
	I0731 23:58:50.434190    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:50.434190    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
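	(The surrounding lines are minikube's node_ready wait loop: roughly every 500 ms it GETs the node object for multinode-411400-m02 and re-checks whether its Ready condition has turned True, logging "Ready":"False" after each unsuccessful cycle. A minimal sketch of that kind of readiness poll written against client-go is shown below; it is an illustration only, not minikube's actual implementation, and the helper name waitForNodeReady is made up here.)

	// Sketch, assuming a standard kubeconfig and client-go; not minikube's own code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node object until its Ready condition is True,
	// mirroring the repeated GET /api/v1/nodes/<name> requests seen in the log.
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reported Ready
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // give up when the caller's deadline expires
			case <-ticker.C:
				// poll again, as the log above does every ~500 ms
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForNodeReady(ctx, cs, "multinode-411400-m02"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}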
	I0731 23:58:50.932365    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:50.932631    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:50.932631    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:50.932631    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:50.935309    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:50.935309    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:50.935309    9020 round_trippers.go:580]     Audit-Id: 779f9a9d-07ae-45ec-b9f7-85c3dfdb88bb
	I0731 23:58:50.935309    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:50.935535    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:50.935535    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:50.935535    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:50.935535    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:50 GMT
	I0731 23:58:50.935762    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:51.423910    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:51.424223    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:51.424223    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:51.424223    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:51.426610    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:51.426610    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:51.427544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:51.427544    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:51.427544    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:51 GMT
	I0731 23:58:51.427544    9020 round_trippers.go:580]     Audit-Id: aa9edc84-854f-4071-ba9b-ecb8913e7019
	I0731 23:58:51.427684    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:51.427684    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:51.427855    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:51.925409    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:51.925503    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:51.925503    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:51.925571    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:51.928105    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:51.928205    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:51.928205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:51.928205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:51 GMT
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Audit-Id: b6447ea5-71d4-460e-a224-ebb4d20165eb
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:51.928244    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:51.928509    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2075","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0731 23:58:52.425890    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:52.425890    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:52.426018    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:52.426018    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:52.430345    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:52.431055    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:52.431055    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:52.431055    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:52 GMT
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Audit-Id: 7b8194fe-551b-442d-92a4-63f0c8358bd6
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:52.431055    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:52.431194    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:52.926821    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:52.926821    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:52.926899    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:52.926899    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:52.930559    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:52.931246    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:52.931246    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:52 GMT
	I0731 23:58:52.931246    9020 round_trippers.go:580]     Audit-Id: 2d1655bc-ce0e-49c9-b96b-81dde0b3b107
	I0731 23:58:52.931309    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:52.931309    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:52.931309    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:52.931309    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:52.931866    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:52.932470    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:53.428845    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:53.428845    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:53.428845    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:53.428845    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:53.431739    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:53.432450    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:53.432450    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:53.432450    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:53 GMT
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Audit-Id: 880d6467-9cc4-477b-868e-978da0ea6038
	I0731 23:58:53.432450    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:53.432642    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:53.928096    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:53.928096    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:53.928179    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:53.928179    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:53.934268    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:58:53.934416    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Audit-Id: f446bb52-6a2b-439b-84b0-f68d0551617b
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:53.934416    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:53.934416    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:53.934416    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:53 GMT
	I0731 23:58:53.934574    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.427915    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:54.427915    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:54.427915    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:54.427915    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:54.430044    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:54.431037    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:54.431060    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:54.431060    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:54 GMT
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Audit-Id: 54b3e86f-7651-4edf-ade0-184cc5893ed3
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:54.431060    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:54.431305    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.927678    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:54.927678    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:54.927678    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:54.927794    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:54.931237    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:54.931721    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:54.931721    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:54 GMT
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Audit-Id: 184110fe-97e7-429e-a2a9-771fc31746af
	I0731 23:58:54.931721    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:54.931799    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:54.931799    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:54.932482    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:54.933025    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:55.428377    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:55.428441    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:55.428441    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:55.428441    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:55.432426    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:55.432426    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:55.432426    9020 round_trippers.go:580]     Audit-Id: 01a1c1d3-bab3-489a-a97b-fcdb255bf3fd
	I0731 23:58:55.432426    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:55.432490    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:55.432490    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:55.432490    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:55.432529    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:55 GMT
	I0731 23:58:55.432854    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:55.926083    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:55.926083    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:55.926217    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:55.926217    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:55.929610    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:55.930072    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Audit-Id: 6f6429f7-b703-4ba2-b1d5-7b679c74f6ee
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:55.930072    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:55.930072    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:55.930072    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:55.930133    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:55 GMT
	I0731 23:58:55.930206    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:56.423647    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:56.423647    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:56.423647    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:56.423647    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:56.426377    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:56.427092    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Audit-Id: acd37543-58f2-4106-a5b6-ebd7057efddc
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:56.427092    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:56.427092    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:56.427092    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:56 GMT
	I0731 23:58:56.427499    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:56.921075    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:56.921075    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:56.921075    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:56.921075    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:56.924780    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:56.924928    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:56.924928    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:56 GMT
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Audit-Id: c736b7e7-4aa7-4b9c-9485-7700e79e1e1c
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:56.924928    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:56.924928    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:56.925152    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:57.420202    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:57.420263    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:57.420263    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:57.420263    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:57.424689    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:58:57.424804    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:57 GMT
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Audit-Id: 21d6d737-db6b-4f65-a4b3-aaae3ed97da7
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:57.424804    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:57.424804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:57.424804    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:57.424982    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2098","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0731 23:58:57.425530    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:57.920338    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:57.920527    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:57.920527    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:57.920527    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:57.927177    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:58:57.927177    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Audit-Id: 16b9e477-8a8a-46ac-b6cf-a172c4810466
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:57.927177    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:57.927177    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:57.927177    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:57 GMT
	I0731 23:58:57.927878    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:58.431900    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:58.431968    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:58.431968    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:58.431968    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:58.435707    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:58.435707    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:58.435707    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:58.435707    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:58 GMT
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Audit-Id: e3655bec-6573-425f-96cc-7e96d43aad47
	I0731 23:58:58.435707    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:58.435707    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:58.930450    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:58.930746    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:58.930746    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:58.930746    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:58.934275    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:58.934275    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:58.934275    9020 round_trippers.go:580]     Audit-Id: b1804a51-172d-4bd4-ba5d-1617d1498987
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:58.934428    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:58.934428    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:58.934428    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:58 GMT
	I0731 23:58:58.934536    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:59.432686    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:59.432686    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:59.432686    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:59.432686    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:59.435453    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:58:59.436019    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:59.436019    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:59 GMT
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Audit-Id: aff48edb-6e80-4ee7-96da-455e6b656597
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:59.436019    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:59.436019    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:59.436190    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:58:59.436629    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:58:59.932258    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:58:59.932258    9020 round_trippers.go:469] Request Headers:
	I0731 23:58:59.932258    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:58:59.932258    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:58:59.936754    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:58:59.936754    9020 round_trippers.go:577] Response Headers:
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:58:59 GMT
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Audit-Id: 5a1d371d-d437-431f-8dcc-6d4120e0f415
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:58:59.936754    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:58:59.936754    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:58:59.936836    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:58:59.936836    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:00.418551    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:00.418825    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:00.418825    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:00.418825    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:00.425768    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:00.425768    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Audit-Id: a946f78e-fe6f-4a36-ae33-a5c344754f81
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:00.425768    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:00.425768    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:00.425768    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:00 GMT
	I0731 23:59:00.427751    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:00.919094    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:00.919094    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:00.919094    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:00.919094    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:00.923522    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:00.923611    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:00.923611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:00.923611    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:00 GMT
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Audit-Id: cd248c2e-9b7b-4425-924c-618fca420017
	I0731 23:59:00.923611    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:00.923611    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.432428    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:01.432428    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:01.432578    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:01.432578    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:01.435595    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:01.435595    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Audit-Id: a06e4862-bff1-4d00-bc65-f171fc4c8696
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:01.435595    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:01.435595    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:01.435595    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:01 GMT
	I0731 23:59:01.435781    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.919993    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:01.919993    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:01.919993    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:01.919993    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:01.926043    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:01.926043    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:01.926043    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:01 GMT
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Audit-Id: 4c748ced-92bd-41a8-93fb-cf46866743d3
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:01.926043    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:01.926043    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:01.926576    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:01.926688    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:02.433682    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:02.434091    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:02.434091    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:02.434091    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:02.437136    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:02.437205    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:02.437205    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:02 GMT
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Audit-Id: 0d5a2be0-7070-4180-856e-4fa4ab058c29
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:02.437275    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:02.437275    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:02.437641    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:02.931720    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:02.931822    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:02.931822    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:02.931822    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:02.935387    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:02.936050    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:02.936050    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:02.936050    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:02 GMT
	I0731 23:59:02.936050    9020 round_trippers.go:580]     Audit-Id: 6e3fd155-1389-44cf-90bd-b127ece7fd87
	I0731 23:59:02.936366    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.427963    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:03.427963    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:03.427963    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:03.427963    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:03.432896    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:03.432896    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:03.432896    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:03.432961    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:03.432961    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:03.432961    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:03 GMT
	I0731 23:59:03.433001    9020 round_trippers.go:580]     Audit-Id: def46f35-b318-4de9-b58c-3ef3a7737823
	I0731 23:59:03.433030    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:03.433030    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.927524    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:03.927718    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:03.927718    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:03.927718    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:03.934462    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:03.934511    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:03 GMT
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Audit-Id: f456134f-de0a-46ed-9c21-672a03ab69b5
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:03.934511    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:03.934511    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:03.934511    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:03.934680    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:03.935223    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:04.426519    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:04.426681    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:04.426681    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:04.426681    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:04.430862    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:04.430862    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:04.430862    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:04.430862    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:04 GMT
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Audit-Id: 4c51f8e9-6165-46b7-9f71-1e11f400056e
	I0731 23:59:04.430862    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:04.430862    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:04.928153    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:04.928434    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:04.928434    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:04.928434    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:04.931993    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:04.932374    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:04.932374    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:04.932374    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:04.932374    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:04 GMT
	I0731 23:59:04.932374    9020 round_trippers.go:580]     Audit-Id: da76c4af-fcbe-4462-ad2b-44673786c161
	I0731 23:59:04.932450    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:04.932450    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:04.932523    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:05.427915    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:05.428110    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:05.428110    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:05.428110    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:05.430907    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:05.431121    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:05.431121    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:05 GMT
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Audit-Id: 8905507e-7335-4db7-b7c2-6fbba6373057
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:05.431121    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:05.431121    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:05.431298    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:05.926984    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:05.927252    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:05.927252    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:05.927252    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:05.931535    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:05.931535    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:05 GMT
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Audit-Id: d22ff78f-cb31-48c6-ab8c-ccd7fe858dde
	I0731 23:59:05.931535    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:05.931782    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:05.931782    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:05.931782    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:05.931911    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:06.426988    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:06.427104    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:06.427104    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:06.427104    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:06.430690    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:06.430743    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Audit-Id: f1ed9130-f7d8-448a-9d6a-d9f9232759b6
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:06.430743    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:06.430743    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:06.430743    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:06.430815    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:06 GMT
	I0731 23:59:06.430946    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:06.431463    9020 node_ready.go:53] node "multinode-411400-m02" has status "Ready":"False"
	I0731 23:59:06.925188    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:06.925417    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:06.925417    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:06.925417    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:06.929269    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:06.929269    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:06.929660    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:06.929660    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:06 GMT
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Audit-Id: df649e0a-9af6-43d5-9695-1ed43c790dc7
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:06.929660    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:06.929833    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:07.424387    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:07.424387    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.424387    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.424387    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.426963    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.427947    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Audit-Id: 1eff860c-590f-4c88-8453-faf246377be3
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.427947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.427947    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.427947    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.428156    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2107","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0731 23:59:07.930096    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:07.930326    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.930326    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.930326    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.933880    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.934728    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Audit-Id: 96e26882-a6cf-4339-aa16-16a6f8555e5e
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.934728    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.934728    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.934728    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.934888    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2118","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0731 23:59:07.935418    9020 node_ready.go:49] node "multinode-411400-m02" has status "Ready":"True"
	I0731 23:59:07.935418    9020 node_ready.go:38] duration metric: took 19.5173608s for node "multinode-411400-m02" to be "Ready" ...
	I0731 23:59:07.935610    9020 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:59:07.935610    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods
	I0731 23:59:07.935769    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.935769    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.935769    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.940101    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:07.941023    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Audit-Id: 380fc529-a5ea-4bd4-8856-3567eb717c53
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.941023    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.941023    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.941023    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.944222    9020 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2121"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86034 chars]
	I0731 23:59:07.950566    9020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.950566    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8gtw
	I0731 23:59:07.950566    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.950566    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.950566    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.955800    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:59:07.955800    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.955800    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Audit-Id: 1ba7ee57-d553-4326-b4a8-7201cce5361b
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.955800    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.955800    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.955800    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-z8gtw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"41ddb3a7-8405-49e7-88fb-41ab6278e4af","resourceVersion":"1920","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"460bf464-bb37-4a96-95ed-57db3ddfd633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"460bf464-bb37-4a96-95ed-57db3ddfd633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6786 chars]
	I0731 23:59:07.956531    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.956531    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.956531    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.956531    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.958753    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.958753    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Audit-Id: c6347ecd-45fb-495b-8c24-543440c0266d
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.958753    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.958753    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.958753    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.958753    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.959739    9020 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.959739    9020 pod_ready.go:81] duration metric: took 9.1737ms for pod "coredns-7db6d8ff4d-z8gtw" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.959739    9020 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.959739    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-411400
	I0731 23:59:07.959739    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.959739    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.959739    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.962964    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.962964    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Audit-Id: 7faf5493-8b3a-4ae2-982f-16640fd8542f
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.962964    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.962964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.962964    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.962964    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-411400","namespace":"kube-system","uid":"4de1ad7a-3a8e-4823-9430-fadd76753763","resourceVersion":"1862","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.27.27:2379","kubernetes.io/config.hash":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.mirror":"e4537b9252538fcc2aa00b9101cd0b02","kubernetes.io/config.seen":"2024-07-31T23:55:48.969840438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6149 chars]
	I0731 23:59:07.964309    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.964309    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.964388    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.964388    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.972811    9020 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 23:59:07.972811    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Audit-Id: f73cb914-aade-47d6-873c-ff4832873e2e
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.972869    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.972869    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.972869    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.973065    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.973698    9020 pod_ready.go:92] pod "etcd-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.973698    9020 pod_ready.go:81] duration metric: took 13.9586ms for pod "etcd-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.973895    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.973983    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-411400
	I0731 23:59:07.974014    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.974014    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.974014    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.976300    9020 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 23:59:07.976300    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.976300    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.976300    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:07 GMT
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Audit-Id: d44591f8-7847-48c4-83d6-193ef4b32f70
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.976300    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.976300    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-411400","namespace":"kube-system","uid":"eaabee4a-7fb0-455f-b354-3fae71ca2878","resourceVersion":"1864","creationTimestamp":"2024-07-31T23:55:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.27.27:8443","kubernetes.io/config.hash":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.mirror":"80f5145283ba4f148f7c29ec99b8490b","kubernetes.io/config.seen":"2024-07-31T23:55:48.898321781Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:55:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7685 chars]
	I0731 23:59:07.976300    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.976300    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.976300    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.976300    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.979963    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:07.979963    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.979963    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Audit-Id: 721d0be3-2be9-484d-927d-216334075dd3
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.980303    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.980303    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.980616    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:07.980715    9020 pod_ready.go:92] pod "kube-apiserver-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:07.980715    9020 pod_ready.go:81] duration metric: took 6.8194ms for pod "kube-apiserver-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.980715    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:07.980715    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-411400
	I0731 23:59:07.980715    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.980715    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:07.980715    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.985379    9020 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 23:59:07.985379    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Audit-Id: 3423b9b4-fd61-40bb-915c-1b1125103937
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:07.985379    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:07.985379    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:07.986236    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:07.986236    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:07.986536    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-411400","namespace":"kube-system","uid":"217a4087-49b2-4b74-a094-e027a51cf503","resourceVersion":"1891","creationTimestamp":"2024-07-31T23:32:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.mirror":"8af5891e3c7d5a17a0be3d02218a4910","kubernetes.io/config.seen":"2024-07-31T23:32:18.716560513Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0731 23:59:07.987227    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:07.987227    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:07.987287    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:07.987287    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.007380    9020 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0731 23:59:08.007779    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Audit-Id: 374b06b6-a342-4797-a7fa-37af41ee3c23
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.007779    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.007779    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.007779    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.007980    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:08.008162    9020 pod_ready.go:92] pod "kube-controller-manager-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:08.008162    9020 pod_ready.go:81] duration metric: took 27.4466ms for pod "kube-controller-manager-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.008162    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.133531    9020 request.go:629] Waited for 125.2461ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:59:08.133650    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5j8pv
	I0731 23:59:08.133650    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.133650    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.133650    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.137619    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.137619    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.137619    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.137619    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.137726    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.137726    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.137726    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.137726    9020 round_trippers.go:580]     Audit-Id: e7fd4fd0-0730-48fc-a1f8-04edeead89d5
	I0731 23:59:08.138121    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5j8pv","generateName":"kube-proxy-","namespace":"kube-system","uid":"761c8479-d25f-4142-93b6-23b0d1e3ccb7","resourceVersion":"1748","creationTimestamp":"2024-07-31T23:40:31Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0731 23:59:08.337338    9020 request.go:629] Waited for 198.5768ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:59:08.337338    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m03
	I0731 23:59:08.337540    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.337540    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.337540    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.340849    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.341493    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Audit-Id: 512b6bd2-2ed6-4988-9dc0-a25e694716d8
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.341493    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.341493    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.341493    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.341668    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m03","uid":"3753504a-97f6-4be0-809b-ee84cbf38121","resourceVersion":"1888","creationTimestamp":"2024-07-31T23:51:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_51_16_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:51:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4398 chars]
	I0731 23:59:08.341668    9020 pod_ready.go:97] node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:59:08.341668    9020 pod_ready.go:81] duration metric: took 333.5016ms for pod "kube-proxy-5j8pv" in "kube-system" namespace to be "Ready" ...
	E0731 23:59:08.342213    9020 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-411400-m03" hosting pod "kube-proxy-5j8pv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-411400-m03" has status "Ready":"Unknown"
	I0731 23:59:08.342213    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.540222    9020 request.go:629] Waited for 197.7939ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:59:08.540305    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-chdxg
	I0731 23:59:08.540305    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.540305    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.540305    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.543355    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.543404    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.543404    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.543404    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Audit-Id: 0ad88411-b846-49dd-8c2a-452d8ab619d4
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.543404    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.544272    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-chdxg","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3405391-f4cb-4ffe-8d51-d669e37d0a3b","resourceVersion":"1853","creationTimestamp":"2024-07-31T23:32:41Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0731 23:59:08.744654    9020 request.go:629] Waited for 199.2863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:08.744948    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:08.744948    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.744948    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.744948    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.749260    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:08.749260    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Audit-Id: 795928f9-fb69-4d0a-9333-05a3fa77941f
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.749260    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.749260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.749260    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.749630    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:08.750281    9020 pod_ready.go:92] pod "kube-proxy-chdxg" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:08.750307    9020 pod_ready.go:81] duration metric: took 408.0888ms for pod "kube-proxy-chdxg" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.750307    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:08.931506    9020 request.go:629] Waited for 180.9231ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:59:08.931601    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7tpl
	I0731 23:59:08.931601    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:08.931601    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:08.931601    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:08.937147    9020 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 23:59:08.937147    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:08.937403    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:08.937403    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:08 GMT
	I0731 23:59:08.937403    9020 round_trippers.go:580]     Audit-Id: 5a0fb46a-0fe5-4bbc-ad09-b79b9c41c9aa
	I0731 23:59:08.937519    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g7tpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"c8356e2e-b324-4001-9b82-18a13b436517","resourceVersion":"2087","creationTimestamp":"2024-07-31T23:35:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229e4f7-e675-49fb-bff5-a5ef99e7b482","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:35:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229e4f7-e675-49fb-bff5-a5ef99e7b482\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0731 23:59:09.132453    9020 request.go:629] Waited for 194.3664ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:09.132962    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400-m02
	I0731 23:59:09.132962    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.132962    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.132962    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.139757    9020 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 23:59:09.139757    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Audit-Id: 0a2fd36f-2010-4b1f-8284-9276a7950b50
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.139757    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.139757    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.139757    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.140431    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400-m02","uid":"7b7215a1-eec1-4e85-9e71-63b16f2d523e","resourceVersion":"2118","creationTimestamp":"2024-07-31T23:58:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_31T23_58_48_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:58:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0731 23:59:09.140431    9020 pod_ready.go:92] pod "kube-proxy-g7tpl" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:09.140971    9020 pod_ready.go:81] duration metric: took 390.6589ms for pod "kube-proxy-g7tpl" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.140971    9020 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.336915    9020 request.go:629] Waited for 195.7299ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:59:09.337145    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-411400
	I0731 23:59:09.337145    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.337145    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.337145    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.344839    9020 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 23:59:09.344839    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.344839    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.344839    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.344839    9020 round_trippers.go:580]     Audit-Id: 44361e7d-f7c3-4627-bbb5-e841f5f15b8a
	I0731 23:59:09.345583    9020 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-411400","namespace":"kube-system","uid":"a10cf66c-3049-48d4-9ab1-8667efc59977","resourceVersion":"1875","creationTimestamp":"2024-07-31T23:32:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.mirror":"5a7b9f6b458b17867ccfec9f54e0c769","kubernetes.io/config.seen":"2024-07-31T23:32:26.731395457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-31T23:32:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0731 23:59:09.539294    9020 request.go:629] Waited for 193.624ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:09.539881    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes/multinode-411400
	I0731 23:59:09.539881    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.539881    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.539881    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.543256    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:09.543256    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.543256    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.543256    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Audit-Id: d4a86f30-a175-4944-9211-4d89d5ff57e1
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.543256    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.543782    9020 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-31T23:32:23Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0731 23:59:09.544395    9020 pod_ready.go:92] pod "kube-scheduler-multinode-411400" in "kube-system" namespace has status "Ready":"True"
	I0731 23:59:09.544395    9020 pod_ready.go:81] duration metric: took 403.4189ms for pod "kube-scheduler-multinode-411400" in "kube-system" namespace to be "Ready" ...
	I0731 23:59:09.544395    9020 pod_ready.go:38] duration metric: took 1.6087646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:59:09.544395    9020 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:59:09.558362    9020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:59:09.581786    9020 system_svc.go:56] duration metric: took 37.1708ms WaitForService to wait for kubelet
	I0731 23:59:09.581821    9020 kubeadm.go:582] duration metric: took 21.4192497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:59:09.581821    9020 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:59:09.740424    9020 request.go:629] Waited for 158.4734ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.27.27:8443/api/v1/nodes
	I0731 23:59:09.740563    9020 round_trippers.go:463] GET https://172.17.27.27:8443/api/v1/nodes
	I0731 23:59:09.740692    9020 round_trippers.go:469] Request Headers:
	I0731 23:59:09.740692    9020 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0731 23:59:09.740692    9020 round_trippers.go:473]     Accept: application/json, */*
	I0731 23:59:09.743943    9020 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 23:59:09.743943    9020 round_trippers.go:577] Response Headers:
	I0731 23:59:09.743943    9020 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24aeccd3-0e87-4ade-babd-62acc695877b
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Date: Wed, 31 Jul 2024 23:59:09 GMT
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Audit-Id: 9fe273cd-df84-45f7-a61c-a392f9661ba8
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 23:59:09.743943    9020 round_trippers.go:580]     Content-Type: application/json
	I0731 23:59:09.743943    9020 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c46cf77-1cc3-4d97-a802-0fedcbdc174c
	I0731 23:59:09.745935    9020 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2124"},"items":[{"metadata":{"name":"multinode-411400","uid":"1b5083b5-38cb-49b5-a488-eb41fc8bb43e","resourceVersion":"1898","creationTimestamp":"2024-07-31T23:32:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-411400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ad0431a8b539d85eadcca9b60d2c335055e9353c","minikube.k8s.io/name":"multinode-411400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_31T23_32_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15604 chars]
	I0731 23:59:09.746890    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:59:09.746963    9020 node_conditions.go:123] node cpu capacity is 2
	I0731 23:59:09.746963    9020 node_conditions.go:105] duration metric: took 165.1393ms to run NodePressure ...
	I0731 23:59:09.746963    9020 start.go:241] waiting for startup goroutines ...
	I0731 23:59:09.747109    9020 start.go:255] writing updated cluster config ...
	I0731 23:59:09.751636    9020 out.go:177] 
	I0731 23:59:09.754893    9020 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:59:09.768181    9020 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:59:09.768414    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:59:09.774883    9020 out.go:177] * Starting "multinode-411400-m03" worker node in "multinode-411400" cluster
	I0731 23:59:09.777435    9020 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 23:59:09.777435    9020 cache.go:56] Caching tarball of preloaded images
	I0731 23:59:09.777435    9020 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 23:59:09.777435    9020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 23:59:09.778563    9020 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-411400\config.json ...
	I0731 23:59:09.785779    9020 start.go:360] acquireMachinesLock for multinode-411400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:59:09.786203    9020 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-411400-m03"
	I0731 23:59:09.786468    9020 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:59:09.786468    9020 fix.go:54] fixHost starting: m03
	I0731 23:59:09.786767    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:11.892899    9020 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:59:11.892899    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:11.892899    9020 fix.go:112] recreateIfNeeded on multinode-411400-m03: state=Stopped err=<nil>
	W0731 23:59:11.892899    9020 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:59:11.898823    9020 out.go:177] * Restarting existing hyperv VM for "multinode-411400-m03" ...
	I0731 23:59:11.902632    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-411400-m03
	I0731 23:59:14.947331    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:59:14.947331    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:14.947677    9020 main.go:141] libmachine: Waiting for host to start...
	I0731 23:59:14.947677    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:17.260444    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:59:17.260444    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:17.261240    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 23:59:19.714817    9020 main.go:141] libmachine: [stdout =====>] : 
	I0731 23:59:19.715421    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:20.728042    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:59:22.905684    9020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:59:22.905684    9020 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:59:22.906353    9020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m03 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Jul 31 23:56:26 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:26.910076629Z" level=info msg="shim disconnected" id=eb701050d6fdad7cbc88ca887781781b5f8708de269644007031a01dff5b564a namespace=moby
	Jul 31 23:56:26 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:26.910853527Z" level=warning msg="cleaning up after shim disconnected" id=eb701050d6fdad7cbc88ca887781781b5f8708de269644007031a01dff5b564a namespace=moby
	Jul 31 23:56:26 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:26.910870727Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.310635108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.310970607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.311052207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.311393807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.321448498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.321510898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.321647298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.321926197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 cri-dockerd[1357]: time="2024-07-31T23:56:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d81ff2bda645e437e54282d4901e26260f1022adb1c8781f8b53f8ac5ef770d7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 23:56:27 multinode-411400 cri-dockerd[1357]: time="2024-07-31T23:56:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/60e2adfdfdf238ad75fcdc729bb55aac52e3f88880c5c25ccafc449bd76b69c4/resolv.conf as [nameserver 172.17.16.1]"
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.825591348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.826466847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.826696347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.826957947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.869537226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.869605326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.869621326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:27 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:27.869734225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:41 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:41.196629291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 23:56:41 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:41.196689193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 23:56:41 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:41.196700493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 23:56:41 multinode-411400 dockerd[1096]: time="2024-07-31T23:56:41.196800096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4db9e365f1808       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   84c5859f1ff10       storage-provisioner
	e01af1f082e0e       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   60e2adfdfdf23       coredns-7db6d8ff4d-z8gtw
	7d225fe3bc69e       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   d81ff2bda645e       busybox-fc5497c4f-4hgmz
	001c36baaaa4d       6f1d07c71fa0f                                                                                         3 minutes ago       Running             kindnet-cni               1                   b84665dd92158       kindnet-j8slc
	a556aed01deeb       55bb025d2cfa5                                                                                         3 minutes ago       Running             kube-proxy                1                   ee2ff9df561ae       kube-proxy-chdxg
	eb701050d6fda       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   84c5859f1ff10       storage-provisioner
	c457f49fa58a4       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   90345b71eaf48       etcd-multinode-411400
	202530d30f51c       3edc18e7b7672                                                                                         3 minutes ago       Running             kube-scheduler            1                   3cd48f7e8f2f8       kube-scheduler-multinode-411400
	ab38eba57cac3       1f6d574d502f3                                                                                         3 minutes ago       Running             kube-apiserver            0                   6776c49036370       kube-apiserver-multinode-411400
	53268c854428d       76932a3b37d7e                                                                                         3 minutes ago       Running             kube-controller-manager   1                   3f2e9889a2452       kube-controller-manager-multinode-411400
	987bcd17ce9fc       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   3be0dbedbcbad       busybox-fc5497c4f-4hgmz
	378f2a6593166       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   7a9f5c5f99578       coredns-7db6d8ff4d-z8gtw
	284902a3378a8       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              26 minutes ago      Exited              kindnet-cni               0                   7c2aeeb2eba1a       kindnet-j8slc
	07b42ba54367f       55bb025d2cfa5                                                                                         27 minutes ago      Exited              kube-proxy                0                   0ae3ab4f2984f       kube-proxy-chdxg
	945a9963cd1c6       76932a3b37d7e                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   785da79d42d73       kube-controller-manager-multinode-411400
	6ce3944d7d13a       3edc18e7b7672                                                                                         27 minutes ago      Exited              kube-scheduler            0                   68e7a182b5fc9       kube-scheduler-multinode-411400
	
	
	==> coredns [378f2a659316] <==
	[INFO] 10.244.1.2:54279 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000210003s
	[INFO] 10.244.1.2:45666 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213203s
	[INFO] 10.244.1.2:52392 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110902s
	[INFO] 10.244.1.2:52229 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000206603s
	[INFO] 10.244.1.2:57725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000054401s
	[INFO] 10.244.1.2:57825 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073201s
	[INFO] 10.244.1.2:50190 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192703s
	[INFO] 10.244.0.3:41508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103802s
	[INFO] 10.244.0.3:36262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000628s
	[INFO] 10.244.0.3:49001 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185403s
	[INFO] 10.244.0.3:55139 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144002s
	[INFO] 10.244.1.2:48028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102402s
	[INFO] 10.244.1.2:43656 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000603s
	[INFO] 10.244.1.2:53475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128602s
	[INFO] 10.244.1.2:45631 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134502s
	[INFO] 10.244.0.3:56422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115701s
	[INFO] 10.244.0.3:46466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000212103s
	[INFO] 10.244.0.3:56888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131002s
	[INFO] 10.244.0.3:54485 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000138902s
	[INFO] 10.244.1.2:54884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252204s
	[INFO] 10.244.1.2:42796 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207503s
	[INFO] 10.244.1.2:44407 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073801s
	[INFO] 10.244.1.2:60271 - 5 "PTR IN 1.16.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000059001s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e01af1f082e0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 01aaa6358818cd69629b54977c0657f45152893fa78c9d48c6346ee2574ccc481acb0dcbfbc0e50b53b225d48ae4f1bf11918b0a55e435e4bcc22cf9a5b1dfb7
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33731 - 19627 "HINFO IN 1167229465202017895.1779773613123335354. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033470914s
	
	
	==> describe nodes <==
	Name:               multinode-411400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-411400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-411400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_32_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-411400
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:59:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:56:14 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:56:14 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:56:14 +0000   Wed, 31 Jul 2024 23:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:56:14 +0000   Wed, 31 Jul 2024 23:56:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.27.27
	  Hostname:    multinode-411400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddcafa9a4a4a4ac69145e9b049fa1ac6
	  System UUID:                64986569-631f-4c4a-a895-51aa6b031756
	  Boot ID:                    ac0ae719-1e2d-4187-950e-387444c6a2af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hgmz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-z8gtw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-411400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-j8slc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-411400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-multinode-411400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-chdxg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-411400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-411400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-411400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-411400 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-411400 event: Registered Node multinode-411400 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-411400 status is now: NodeReady
	  Normal  Starting                 3m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m58s)  kubelet          Node multinode-411400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m58s)  kubelet          Node multinode-411400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m58s)  kubelet          Node multinode-411400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m40s                  node-controller  Node multinode-411400 event: Registered Node multinode-411400 in Controller
	
	
	Name:               multinode-411400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-411400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-411400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_58_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:58:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-411400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:59:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:59:07 +0000   Wed, 31 Jul 2024 23:58:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:59:07 +0000   Wed, 31 Jul 2024 23:58:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:59:07 +0000   Wed, 31 Jul 2024 23:58:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:59:07 +0000   Wed, 31 Jul 2024 23:59:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.23.93
	  Hostname:    multinode-411400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 caf4b338b4cd4fd88c340e703794c0c8
	  System UUID:                cdb6ffbb-7e6a-b048-96cd-deaf1ec1b465
	  Boot ID:                    12453187-9725-498b-b751-1d6f7ef1dd03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n8wv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kindnet-bgnqq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-g7tpl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-411400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node multinode-411400-m02 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)  kubelet          Node multinode-411400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)  kubelet          Node multinode-411400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                node-controller  Node multinode-411400-m02 event: Registered Node multinode-411400-m02 in Controller
	  Normal  NodeReady                39s                kubelet          Node multinode-411400-m02 status is now: NodeReady
	
	
	Name:               multinode-411400-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-411400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-411400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_51_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:51:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-411400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:52:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 23:51:33 +0000   Wed, 31 Jul 2024 23:53:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 23:51:33 +0000   Wed, 31 Jul 2024 23:53:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 23:51:33 +0000   Wed, 31 Jul 2024 23:53:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 23:51:33 +0000   Wed, 31 Jul 2024 23:53:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.16.77
	  Hostname:    multinode-411400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0916d8e25a4342e39ff0f3e06167910e
	  System UUID:                35ca4ddc-5daa-ed4d-8416-dac1e1f5fb52
	  Boot ID:                    90240a57-5e52-4f36-a738-5f6c0de31247
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cxs2b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-5j8pv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m27s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-411400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-411400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-411400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-411400-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m31s (x2 over 8m31s)  kubelet          Node multinode-411400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s (x2 over 8m31s)  kubelet          Node multinode-411400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s (x2 over 8m31s)  kubelet          Node multinode-411400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m26s                  node-controller  Node multinode-411400-m03 event: Registered Node multinode-411400-m03 in Controller
	  Normal  NodeReady                8m13s                  kubelet          Node multinode-411400-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m36s                  node-controller  Node multinode-411400-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m40s                  node-controller  Node multinode-411400-m03 event: Registered Node multinode-411400-m03 in Controller
	
	
	==> dmesg <==
	[  +0.755422] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.639185] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.567260] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 23:55] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.095851] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.061764] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[ +26.194808] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[  +0.099741] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.542074] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.173434] systemd-fstab-generator[1068]: Ignoring "noauto" option for root device
	[  +0.213269] systemd-fstab-generator[1082]: Ignoring "noauto" option for root device
	[  +2.900630] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +0.174772] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.160167] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.253012] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.849194] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[  +0.088488] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.895784] systemd-fstab-generator[1617]: Ignoring "noauto" option for root device
	[  +1.193481] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.873837] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.442708] systemd-fstab-generator[2448]: Ignoring "noauto" option for root device
	[Jul31 23:56] kauditd_printk_skb: 70 callbacks suppressed
	[ +34.500586] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [c457f49fa58a] <==
	{"level":"info","ts":"2024-07-31T23:55:51.09238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:55:51.092451Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:55:51.092761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 switched to configuration voters=(14863556373966683750)"}
	{"level":"info","ts":"2024-07-31T23:55:51.092834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b51f195d96aa0e6c","local-member-id":"ce45f5c58946fe66","added-peer-id":"ce45f5c58946fe66","added-peer-peer-urls":["https://172.17.20.56:2380"]}
	{"level":"info","ts":"2024-07-31T23:55:51.093Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b51f195d96aa0e6c","local-member-id":"ce45f5c58946fe66","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:55:51.093096Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:55:51.093628Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:55:51.097498Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce45f5c58946fe66","initial-advertise-peer-urls":["https://172.17.27.27:2380"],"listen-peer-urls":["https://172.17.27.27:2380"],"advertise-client-urls":["https://172.17.27.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.27.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:55:51.097742Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:55:51.09809Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.27.27:2380"}
	{"level":"info","ts":"2024-07-31T23:55:51.098242Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.27.27:2380"}
	{"level":"info","ts":"2024-07-31T23:55:52.632459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:55:52.632768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:55:52.633104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 received MsgPreVoteResp from ce45f5c58946fe66 at term 2"}
	{"level":"info","ts":"2024-07-31T23:55:52.633251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:55:52.633419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 received MsgVoteResp from ce45f5c58946fe66 at term 3"}
	{"level":"info","ts":"2024-07-31T23:55:52.63361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce45f5c58946fe66 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:55:52.633771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce45f5c58946fe66 elected leader ce45f5c58946fe66 at term 3"}
	{"level":"info","ts":"2024-07-31T23:55:52.641471Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce45f5c58946fe66","local-member-attributes":"{Name:multinode-411400 ClientURLs:[https://172.17.27.27:2379]}","request-path":"/0/members/ce45f5c58946fe66/attributes","cluster-id":"b51f195d96aa0e6c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:55:52.641738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:55:52.642536Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:55:52.642703Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:55:52.641827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:55:52.646196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:55:52.647725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.27.27:2379"}
	
	
	==> kernel <==
	 23:59:47 up 5 min,  0 users,  load average: 0.16, 0.21, 0.09
	Linux multinode-411400 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [001c36baaaa4] <==
	I0731 23:58:58.229177       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:59:08.226910       1 main.go:295] Handling node with IPs: map[172.17.27.27:{}]
	I0731 23:59:08.227209       1 main.go:299] handling current node
	I0731 23:59:08.227313       1 main.go:295] Handling node with IPs: map[172.17.23.93:{}]
	I0731 23:59:08.227438       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:59:08.227801       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:59:08.227964       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:59:18.231251       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:59:18.231319       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:59:18.231832       1 main.go:295] Handling node with IPs: map[172.17.27.27:{}]
	I0731 23:59:18.231865       1 main.go:299] handling current node
	I0731 23:59:18.231880       1 main.go:295] Handling node with IPs: map[172.17.23.93:{}]
	I0731 23:59:18.231940       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:59:28.226819       1 main.go:295] Handling node with IPs: map[172.17.27.27:{}]
	I0731 23:59:28.226906       1 main.go:299] handling current node
	I0731 23:59:28.226958       1 main.go:295] Handling node with IPs: map[172.17.23.93:{}]
	I0731 23:59:28.226974       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:59:28.227453       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:59:28.227542       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:59:38.231633       1 main.go:295] Handling node with IPs: map[172.17.27.27:{}]
	I0731 23:59:38.231686       1 main.go:299] handling current node
	I0731 23:59:38.231720       1 main.go:295] Handling node with IPs: map[172.17.23.93:{}]
	I0731 23:59:38.231726       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:59:38.231962       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:59:38.232027       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [284902a3378a] <==
	I0731 23:52:40.938328       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:52:50.927666       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:52:50.927701       1 main.go:299] handling current node
	I0731 23:52:50.927718       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:52:50.927724       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:52:50.927875       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:52:50.927889       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:53:00.935555       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:53:00.935647       1 main.go:299] handling current node
	I0731 23:53:00.935700       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:53:00.935708       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:53:00.936262       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:53:00.936299       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:53:10.929551       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:53:10.930031       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	I0731 23:53:10.930526       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:53:10.930559       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:53:10.930686       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:53:10.930816       1 main.go:299] handling current node
	I0731 23:53:20.926994       1 main.go:295] Handling node with IPs: map[172.17.16.77:{}]
	I0731 23:53:20.927096       1 main.go:322] Node multinode-411400-m03 has CIDR [10.244.3.0/24] 
	I0731 23:53:20.927411       1 main.go:295] Handling node with IPs: map[172.17.20.56:{}]
	I0731 23:53:20.927480       1 main.go:299] handling current node
	I0731 23:53:20.927497       1 main.go:295] Handling node with IPs: map[172.17.28.42:{}]
	I0731 23:53:20.927668       1 main.go:322] Node multinode-411400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ab38eba57cac] <==
	I0731 23:55:54.192553       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:55:54.193916       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:55:54.195328       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:55:54.195346       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:55:54.199947       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 23:55:54.200253       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 23:55:54.200630       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:55:54.204841       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 23:55:54.212703       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:55:54.212761       1 aggregator.go:165] initial CRD sync complete...
	I0731 23:55:54.212769       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 23:55:54.212790       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:55:54.212795       1 cache.go:39] Caches are synced for autoregister controller
	I0731 23:55:54.224954       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:55:54.225877       1 policy_source.go:224] refreshing policies
	I0731 23:55:54.255337       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:55:55.025454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 23:55:55.528202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.27.27]
	I0731 23:55:55.530656       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:55:55.555950       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:55:56.967966       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:55:57.189467       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:55:57.207617       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:55:57.313971       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:55:57.323878       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [53268c854428] <==
	I0731 23:56:07.407818       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:56:14.649936       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:56:29.091014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.141074ms"
	I0731 23:56:29.091583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.501µs"
	I0731 23:56:29.135960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.202µs"
	I0731 23:56:29.195541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.080171ms"
	I0731 23:56:29.196069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="98.303µs"
	I0731 23:56:46.902917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.19845ms"
	I0731 23:56:46.904825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.701µs"
	I0731 23:58:32.903559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.878254ms"
	I0731 23:58:32.926212       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.467054ms"
	I0731 23:58:32.926563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.1µs"
	E0731 23:58:46.730278       1 gc_controller.go:153] "Failed to get node" err="node \"multinode-411400-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-411400-m02"
	I0731 23:58:47.497447       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-411400-m02\" does not exist"
	I0731 23:58:47.519409       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-411400-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:58:49.451090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.1µs"
	I0731 23:59:07.505781       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:59:07.542455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.601µs"
	I0731 23:59:13.463039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="275.703µs"
	I0731 23:59:13.799547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.901µs"
	I0731 23:59:13.805590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.1µs"
	I0731 23:59:16.398572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.301µs"
	I0731 23:59:16.425434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.001µs"
	I0731 23:59:17.855975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.698795ms"
	I0731 23:59:17.856292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.601µs"
	
	
	==> kube-controller-manager [945a9963cd1c] <==
	I0731 23:33:03.126751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.301µs"
	I0731 23:33:05.072876       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0731 23:35:43.850887       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-411400-m02\" does not exist"
	I0731 23:35:43.875150       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-411400-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:35:45.099078       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-411400-m02"
	I0731 23:36:13.159760       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:36:39.043225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.887684ms"
	I0731 23:36:39.085395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.089769ms"
	I0731 23:36:39.085716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.201µs"
	I0731 23:36:42.101445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.824663ms"
	I0731 23:36:42.101837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.5µs"
	I0731 23:36:42.303764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.648117ms"
	I0731 23:36:42.304221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32µs"
	I0731 23:40:31.763911       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-411400-m03\" does not exist"
	I0731 23:40:31.765117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:40:31.810540       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-411400-m03" podCIDRs=["10.244.2.0/24"]
	I0731 23:40:35.169292       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-411400-m03"
	I0731 23:41:00.743069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:48:35.307207       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:51:09.768626       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:51:15.645412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:51:15.645517       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-411400-m03\" does not exist"
	I0731 23:51:15.664509       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-411400-m03" podCIDRs=["10.244.3.0/24"]
	I0731 23:51:33.614000       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	I0731 23:53:10.683385       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-411400-m02"
	
	
	==> kube-proxy [07b42ba54367] <==
	I0731 23:32:43.296545       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:32:43.313426       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.20.56"]
	I0731 23:32:43.376657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:32:43.376767       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:32:43.376822       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:32:43.383647       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:32:43.384448       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:32:43.384541       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:32:43.386410       1 config.go:192] "Starting service config controller"
	I0731 23:32:43.386452       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:32:43.386479       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:32:43.386624       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:32:43.387800       1 config.go:319] "Starting node config controller"
	I0731 23:32:43.387837       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:32:43.487419       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:32:43.488133       1 shared_informer.go:320] Caches are synced for node config
	I0731 23:32:43.487437       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [a556aed01dee] <==
	I0731 23:55:57.206829       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:55:57.271416       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.27.27"]
	I0731 23:55:57.528610       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:55:57.529094       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:55:57.529281       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:55:57.534722       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:55:57.535256       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:55:57.535466       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:55:57.539755       1 config.go:192] "Starting service config controller"
	I0731 23:55:57.540365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:55:57.542066       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:55:57.542097       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:55:57.544846       1 config.go:319] "Starting node config controller"
	I0731 23:55:57.544891       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:55:57.640598       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:55:57.643330       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:55:57.645007       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [202530d30f51] <==
	I0731 23:55:52.232218       1 serving.go:380] Generated self-signed cert in-memory
	W0731 23:55:54.050192       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 23:55:54.050433       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:55:54.050537       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:55:54.050680       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:55:54.105401       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 23:55:54.105621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:55:54.116244       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 23:55:54.118784       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 23:55:54.118959       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:55:54.123651       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 23:55:54.224417       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6ce3944d7d13] <==
	E0731 23:32:24.386124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 23:32:24.475869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:32:24.476222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:32:24.551748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.552706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:24.652807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:32:24.652916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:32:24.754982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.755242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:24.795115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:32:24.795160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:32:24.809824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:32:24.809992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 23:32:24.926720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:32:24.927414       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 23:32:24.927383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:32:24.927749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:32:24.936525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:32:24.936549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:32:24.979298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:32:24.979424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:32:25.030175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:32:25.030225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 23:32:26.709053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 23:53:26.915641       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 23:56:10 multinode-411400 kubelet[1624]: E0731 23:56:10.624606    1624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ddb3a7-8405-49e7-88fb-41ab6278e4af-config-volume podName:41ddb3a7-8405-49e7-88fb-41ab6278e4af nodeName:}" failed. No retries permitted until 2024-07-31 23:56:26.62458865 +0000 UTC m=+37.870013291 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/41ddb3a7-8405-49e7-88fb-41ab6278e4af-config-volume") pod "coredns-7db6d8ff4d-z8gtw" (UID: "41ddb3a7-8405-49e7-88fb-41ab6278e4af") : object "kube-system"/"coredns" not registered
	Jul 31 23:56:10 multinode-411400 kubelet[1624]: E0731 23:56:10.725798    1624 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 31 23:56:10 multinode-411400 kubelet[1624]: E0731 23:56:10.725893    1624 projected.go:200] Error preparing data for projected volume kube-api-access-qs5rn for pod default/busybox-fc5497c4f-4hgmz: object "default"/"kube-root-ca.crt" not registered
	Jul 31 23:56:10 multinode-411400 kubelet[1624]: E0731 23:56:10.725999    1624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5430f4af-5b97-4c7d-90cc-53926f8d496b-kube-api-access-qs5rn podName:5430f4af-5b97-4c7d-90cc-53926f8d496b nodeName:}" failed. No retries permitted until 2024-07-31 23:56:26.725980593 +0000 UTC m=+37.971405234 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qs5rn" (UniqueName: "kubernetes.io/projected/5430f4af-5b97-4c7d-90cc-53926f8d496b-kube-api-access-qs5rn") pod "busybox-fc5497c4f-4hgmz" (UID: "5430f4af-5b97-4c7d-90cc-53926f8d496b") : object "default"/"kube-root-ca.crt" not registered
	Jul 31 23:56:28 multinode-411400 kubelet[1624]: I0731 23:56:28.021392    1624 scope.go:117] "RemoveContainer" containerID="1d63a0cb77d558c74b9beef02a6c58e2c1a747ce5e1e1c4db20be3152dc232ea"
	Jul 31 23:56:28 multinode-411400 kubelet[1624]: I0731 23:56:28.021833    1624 scope.go:117] "RemoveContainer" containerID="eb701050d6fdad7cbc88ca887781781b5f8708de269644007031a01dff5b564a"
	Jul 31 23:56:28 multinode-411400 kubelet[1624]: E0731 23:56:28.022056    1624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f33ea8e6-6b88-471e-a471-d3c4faf9de93)\"" pod="kube-system/storage-provisioner" podUID="f33ea8e6-6b88-471e-a471-d3c4faf9de93"
	Jul 31 23:56:41 multinode-411400 kubelet[1624]: I0731 23:56:41.001842    1624 scope.go:117] "RemoveContainer" containerID="eb701050d6fdad7cbc88ca887781781b5f8708de269644007031a01dff5b564a"
	Jul 31 23:56:49 multinode-411400 kubelet[1624]: E0731 23:56:49.029348    1624 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:56:49 multinode-411400 kubelet[1624]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:56:49 multinode-411400 kubelet[1624]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:56:49 multinode-411400 kubelet[1624]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:56:49 multinode-411400 kubelet[1624]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:56:49 multinode-411400 kubelet[1624]: I0731 23:56:49.039112    1624 scope.go:117] "RemoveContainer" containerID="54a3651cfe8b04e0414115bf27db7fd2765e308460b9950409c6c3942c5d1ba1"
	Jul 31 23:56:49 multinode-411400 kubelet[1624]: I0731 23:56:49.086919    1624 scope.go:117] "RemoveContainer" containerID="534fd9010fca60383792c56f88fe78c67e2fabcf5aff99922c6866b0ddac17de"
	Jul 31 23:57:49 multinode-411400 kubelet[1624]: E0731 23:57:49.028921    1624 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:57:49 multinode-411400 kubelet[1624]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:57:49 multinode-411400 kubelet[1624]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:57:49 multinode-411400 kubelet[1624]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:57:49 multinode-411400 kubelet[1624]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:58:49 multinode-411400 kubelet[1624]: E0731 23:58:49.028543    1624 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:58:49 multinode-411400 kubelet[1624]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:58:49 multinode-411400 kubelet[1624]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:58:49 multinode-411400 kubelet[1624]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:58:49 multinode-411400 kubelet[1624]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:59:38.866237    8540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-411400 -n multinode-411400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-411400 -n multinode-411400: (11.8951356s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-411400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (470.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (299.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-271800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-271800 --driver=hyperv: exit status 1 (4m59.6415605s)

                                                
                                                
-- stdout --
	* [NoKubernetes-271800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-271800" primary control-plane node in "NoKubernetes-271800" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0801 00:16:23.919077    9344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-271800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-271800 -n NoKubernetes-271800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-271800 -n NoKubernetes-271800: exit status 7 (308.3614ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0801 00:21:23.582637    3316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-271800" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.95s)

                                                
                                    

Test pass (127/195)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 20.34
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.42
9 TestDownloadOnly/v1.20.0/DeleteAll 1.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.26
12 TestDownloadOnly/v1.30.3/json-events 10.64
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 1.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 1.29
21 TestDownloadOnly/v1.31.0-beta.0/json-events 16.94
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 1.18
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.22
30 TestBinaryMirror 6.83
31 TestOffline 293.46
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
36 TestAddons/Setup 437.85
38 TestAddons/serial/Volcano 62.28
40 TestAddons/serial/GCPAuth/Namespaces 0.33
43 TestAddons/parallel/Ingress 65.42
44 TestAddons/parallel/InspektorGadget 26.1
45 TestAddons/parallel/MetricsServer 22.04
46 TestAddons/parallel/HelmTiller 30.35
48 TestAddons/parallel/CSI 84.81
49 TestAddons/parallel/Headlamp 40.68
50 TestAddons/parallel/CloudSpanner 20.4
51 TestAddons/parallel/LocalPath 87.34
52 TestAddons/parallel/NvidiaDevicePlugin 20.3
53 TestAddons/parallel/Yakd 27.22
54 TestAddons/StoppedEnableDisable 51.78
59 TestForceSystemdEnv 402.73
66 TestErrorSpam/start 17.09
67 TestErrorSpam/status 36.26
68 TestErrorSpam/pause 22.34
69 TestErrorSpam/unpause 22.18
70 TestErrorSpam/stop 55.64
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 222.13
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 124.86
77 TestFunctional/serial/KubeContext 0.14
78 TestFunctional/serial/KubectlGetPods 0.25
81 TestFunctional/serial/CacheCmd/cache/add_remote 26.02
82 TestFunctional/serial/CacheCmd/cache/add_local 11.13
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
84 TestFunctional/serial/CacheCmd/cache/list 0.26
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.15
86 TestFunctional/serial/CacheCmd/cache/cache_reload 35.97
87 TestFunctional/serial/CacheCmd/cache/delete 0.53
88 TestFunctional/serial/MinikubeKubectlCmd 0.51
92 TestFunctional/serial/LogsCmd 229.2
93 TestFunctional/serial/LogsFileCmd 120.56
105 TestFunctional/parallel/AddonsCmd 0.72
108 TestFunctional/parallel/SSHCmd 19.3
109 TestFunctional/parallel/CpCmd 49.69
111 TestFunctional/parallel/FileSync 9.5
112 TestFunctional/parallel/CertSync 54.02
118 TestFunctional/parallel/NonActiveRuntimeDisabled 8.94
120 TestFunctional/parallel/License 2.78
122 TestFunctional/parallel/ProfileCmd/profile_not_create 11.95
125 TestFunctional/parallel/ProfileCmd/profile_list 11.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 10.38
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
140 TestFunctional/parallel/Version/short 0.24
141 TestFunctional/parallel/Version/components 7.68
147 TestFunctional/parallel/ImageCommands/Setup 2.31
153 TestFunctional/parallel/UpdateContextCmd/no_changes 2.46
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.47
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.43
156 TestFunctional/parallel/ImageCommands/ImageRemove 120.39
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 120.47
159 TestFunctional/delete_echo-server_images 0.37
160 TestFunctional/delete_my-image_image 0.17
161 TestFunctional/delete_minikube_cached_images 0.17
165 TestMultiControlPlane/serial/StartCluster 716.69
166 TestMultiControlPlane/serial/DeployApp 13.52
168 TestMultiControlPlane/serial/AddWorkerNode 264.58
169 TestMultiControlPlane/serial/NodeLabels 0.19
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 28.95
171 TestMultiControlPlane/serial/CopyFile 623.47
172 TestMultiControlPlane/serial/StopSecondaryNode 76.35
179 TestJSONOutput/start/Command 244.47
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 8.03
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 7.9
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 40.23
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 1.45
207 TestMainNoArgs 0.25
208 TestMinikubeProfile 519.39
211 TestMountStart/serial/StartWithMountFirst 149.02
212 TestMountStart/serial/VerifyMountFirst 9.25
213 TestMountStart/serial/StartWithMountSecond 150.5
214 TestMountStart/serial/VerifyMountSecond 9.27
215 TestMountStart/serial/DeleteFirst 29.66
216 TestMountStart/serial/VerifyMountPostDelete 9.19
217 TestMountStart/serial/Stop 29.35
218 TestMountStart/serial/RestartStopped 113.84
219 TestMountStart/serial/VerifyMountPostStop 9.2
222 TestMultiNode/serial/FreshStart2Nodes 433.81
223 TestMultiNode/serial/DeployApp2Nodes 9.5
225 TestMultiNode/serial/AddNode 233.3
226 TestMultiNode/serial/MultiNodeLabels 0.18
227 TestMultiNode/serial/ProfileList 9.47
228 TestMultiNode/serial/CopyFile 355.56
229 TestMultiNode/serial/StopNode 75.5
230 TestMultiNode/serial/StartAfterStop 192.25
235 TestPreload 493.43
236 TestScheduledStopWindows 340.61
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
x
+
TestDownloadOnly/v1.20.0/json-events (20.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-716400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-716400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (20.3352455s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-716400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-716400: exit status 85 (420.7946ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |          |
	|         | -p download-only-716400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:29:28
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:29:28.168151    3900 out.go:291] Setting OutFile to fd 616 ...
	I0731 21:29:28.169093    3900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:29:28.169238    3900 out.go:304] Setting ErrFile to fd 620...
	I0731 21:29:28.169332    3900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 21:29:28.183772    3900 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0731 21:29:28.194781    3900 out.go:298] Setting JSON to true
	I0731 21:29:28.197779    3900 start.go:129] hostinfo: {"hostname":"minikube6","uptime":537309,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:29:28.197779    3900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:29:28.208768    3900 out.go:97] [download-only-716400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:29:28.208768    3900 notify.go:220] Checking for updates...
	W0731 21:29:28.208768    3900 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0731 21:29:28.212772    3900 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:29:28.221794    3900 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:29:28.228496    3900 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:29:28.233749    3900 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0731 21:29:28.241686    3900 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 21:29:28.242384    3900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:29:33.646670    3900 out.go:97] Using the hyperv driver based on user configuration
	I0731 21:29:33.646777    3900 start.go:297] selected driver: hyperv
	I0731 21:29:33.646777    3900 start.go:901] validating driver "hyperv" against <nil>
	I0731 21:29:33.646777    3900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:29:33.699689    3900 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0731 21:29:33.700386    3900 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:29:33.701273    3900 cni.go:84] Creating CNI manager for ""
	I0731 21:29:33.701377    3900 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 21:29:33.701377    3900 start.go:340] cluster config:
	{Name:download-only-716400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-716400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:33.702513    3900 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:29:33.707958    3900 out.go:97] Downloading VM boot image ...
	I0731 21:29:33.708160    3900 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:29:38.201152    3900 out.go:97] Starting "download-only-716400" primary control-plane node in "download-only-716400" cluster
	I0731 21:29:38.202089    3900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 21:29:38.243121    3900 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0731 21:29:38.243121    3900 cache.go:56] Caching tarball of preloaded images
	I0731 21:29:38.243121    3900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 21:29:38.247898    3900 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 21:29:38.247898    3900 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:29:38.325041    3900 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0731 21:29:41.280687    3900 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:29:41.282471    3900 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:29:42.194068    3900 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 21:29:42.195065    3900 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-716400\config.json ...
	I0731 21:29:42.195762    3900 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-716400\config.json: {Name:mk45d89b3b1e0e2d897b6f85b445da04a5fff6f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:42.197065    3900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 21:29:42.198053    3900 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-716400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-716400"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:29:48.483021    7696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.42s)
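
Note: the LogsDuration output above also records the preload download and its MD5 verification (preload.go:236/254). If the cached tarball ever needs to be re-checked by hand, a minimal Go sketch of the same comparison could look like the following; the path and expected hash are copied from the log above, and the snippet is illustrative rather than the test's own helper.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Path and expected MD5 as reported in the download log above.
		path := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4`
		want := "9a82241e9b8b4ad2b5cca73108f2c7a3"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload checksum OK")
	}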

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1377275s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-716400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-716400: (1.262205s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (10.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-224000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-224000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv: (10.6438448s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-224000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-224000: exit status 85 (295.3991ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |                     |
	|         | -p download-only-716400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| delete  | -p download-only-716400        | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| start   | -o=json --download-only        | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |                     |
	|         | -p download-only-224000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:29:51
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:29:51.382447    5748 out.go:291] Setting OutFile to fd 732 ...
	I0731 21:29:51.383080    5748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:29:51.383080    5748 out.go:304] Setting ErrFile to fd 736...
	I0731 21:29:51.383080    5748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:29:51.405523    5748 out.go:298] Setting JSON to true
	I0731 21:29:51.408240    5748 start.go:129] hostinfo: {"hostname":"minikube6","uptime":537333,"bootTime":1721924058,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:29:51.408240    5748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:29:51.497805    5748 out.go:97] [download-only-224000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:29:51.498722    5748 notify.go:220] Checking for updates...
	I0731 21:29:51.501320    5748 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:29:51.504427    5748 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:29:51.506608    5748 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:29:51.510223    5748 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0731 21:29:51.515755    5748 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 21:29:51.516563    5748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:29:56.614660    5748 out.go:97] Using the hyperv driver based on user configuration
	I0731 21:29:56.614660    5748 start.go:297] selected driver: hyperv
	I0731 21:29:56.614660    5748 start.go:901] validating driver "hyperv" against <nil>
	I0731 21:29:56.615468    5748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:29:56.666303    5748 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0731 21:29:56.667500    5748 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:29:56.667627    5748 cni.go:84] Creating CNI manager for ""
	I0731 21:29:56.667627    5748 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:29:56.667627    5748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:29:56.667627    5748 start.go:340] cluster config:
	{Name:download-only-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:56.668146    5748 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:29:56.671331    5748 out.go:97] Starting "download-only-224000" primary control-plane node in "download-only-224000" cluster
	I0731 21:29:56.671331    5748 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:29:56.737640    5748 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:29:56.737721    5748 cache.go:56] Caching tarball of preloaded images
	I0731 21:29:56.737974    5748 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:29:56.741189    5748 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 21:29:56.741189    5748 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:29:56.815695    5748 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 21:29:59.837660    5748 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:29:59.838826    5748 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:30:00.699681    5748 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 21:30:00.700673    5748 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-224000\config.json ...
	I0731 21:30:00.701311    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-224000\config.json: {Name:mk84adf53fade6c3db85419f17a73ef71ff6d625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:00.701507    5748 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 21:30:00.702536    5748 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.3/kubectl.exe
	
	
	* The control-plane node download-only-224000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-224000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:30:01.954518   12868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (1.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1957083s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (1.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-224000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-224000: (1.2924859s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (16.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-756300 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-756300 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv: (16.9349937s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (16.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-756300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-756300: exit status 85 (284.2994ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |                     |
	|         | -p download-only-716400             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| delete  | -p download-only-716400             | download-only-716400 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC | 31 Jul 24 21:29 UTC |
	| start   | -o=json --download-only             | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:29 UTC |                     |
	|         | -p download-only-224000             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| delete  | -p download-only-224000             | download-only-224000 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC | 31 Jul 24 21:30 UTC |
	| start   | -o=json --download-only             | download-only-756300 | minikube6\jenkins | v1.33.1 | 31 Jul 24 21:30 UTC |                     |
	|         | -p download-only-756300             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:30:04
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:30:04.815176    8548 out.go:291] Setting OutFile to fd 720 ...
	I0731 21:30:04.815776    8548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:30:04.815776    8548 out.go:304] Setting ErrFile to fd 724...
	I0731 21:30:04.815776    8548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:30:04.839994    8548 out.go:298] Setting JSON to true
	I0731 21:30:04.842667    8548 start.go:129] hostinfo: {"hostname":"minikube6","uptime":537346,"bootTime":1721924058,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 21:30:04.842667    8548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 21:30:04.848827    8548 out.go:97] [download-only-756300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 21:30:04.848827    8548 notify.go:220] Checking for updates...
	I0731 21:30:04.851800    8548 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 21:30:04.854772    8548 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 21:30:04.857462    8548 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:30:04.859865    8548 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0731 21:30:04.869149    8548 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 21:30:04.870011    8548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:30:10.228208    8548 out.go:97] Using the hyperv driver based on user configuration
	I0731 21:30:10.228208    8548 start.go:297] selected driver: hyperv
	I0731 21:30:10.228208    8548 start.go:901] validating driver "hyperv" against <nil>
	I0731 21:30:10.228744    8548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:30:10.275998    8548 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0731 21:30:10.276924    8548 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:30:10.276924    8548 cni.go:84] Creating CNI manager for ""
	I0731 21:30:10.276924    8548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 21:30:10.277106    8548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:30:10.277106    8548 start.go:340] cluster config:
	{Name:download-only-756300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-756300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:10.277106    8548 iso.go:125] acquiring lock: {Name:mk51465eaa337f49a286b30986b5f3d5f63e6787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:30:10.281095    8548 out.go:97] Starting "download-only-756300" primary control-plane node in "download-only-756300" cluster
	I0731 21:30:10.281095    8548 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 21:30:10.328593    8548 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0731 21:30:10.328593    8548 cache.go:56] Caching tarball of preloaded images
	I0731 21:30:10.329442    8548 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 21:30:10.335446    8548 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 21:30:10.335446    8548 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:30:10.401338    8548 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0731 21:30:13.220084    8548 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:30:13.220973    8548 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 21:30:13.988990    8548 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 21:30:13.990079    8548 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-756300\config.json ...
	I0731 21:30:13.990634    8548 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-756300\config.json: {Name:mk8405ae68155d4e516ab9178d89b2a24e6b7b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:13.991861    8548 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 21:30:13.992146    8548 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.31.0-beta.0/kubectl.exe
	
	
	* The control-plane node download-only-756300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-756300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:30:21.670579   12848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1787941s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-756300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-756300: (1.2213713s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.22s)

                                                
                                    
x
+
TestBinaryMirror (6.83s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-920900 --alsologtostderr --binary-mirror http://127.0.0.1:52768 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-920900 --alsologtostderr --binary-mirror http://127.0.0.1:52768 --driver=hyperv: (6.0110066s)
helpers_test.go:175: Cleaning up "binary-mirror-920900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-920900
--- PASS: TestBinaryMirror (6.83s)
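
Note: TestBinaryMirror passes --binary-mirror http://127.0.0.1:52768 to minikube start. As a hedged illustration only, a static file server of that general shape could be written in Go as below; the mirror directory is hypothetical and the exact layout minikube expects is not shown in this report.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory over HTTP; minikube would then be started with
		// --binary-mirror http://127.0.0.1:52768, matching the command above.
		fs := http.FileServer(http.Dir(`C:\mirror`)) // hypothetical mirror root
		log.Fatal(http.ListenAndServe("127.0.0.1:52768", fs))
	}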

                                                
                                    
x
+
TestOffline (293.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-482100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-482100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m12.3500865s)
helpers_test.go:175: Cleaning up "offline-docker-482100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-482100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-482100: (41.1078449s)
--- PASS: TestOffline (293.46s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-608900
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-608900: exit status 85 (289.6965ms)

                                                
                                                
-- stdout --
	* Profile "addons-608900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608900"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:30:35.038874    9440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-608900
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-608900: exit status 85 (294.7537ms)

                                                
                                                
-- stdout --
	* Profile "addons-608900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608900"

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 21:30:35.038874    7460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

                                                
                                    
x
+
TestAddons/Setup (437.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-608900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-608900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m17.8466174s)
--- PASS: TestAddons/Setup (437.85s)

                                                
                                    
x
+
TestAddons/serial/Volcano (62.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 34.9352ms
addons_test.go:913: volcano-controller stabilized in 35.1684ms
addons_test.go:897: volcano-scheduler stabilized in 35.2864ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-hk8cn" [e6652526-a2fe-4ee5-9dc8-8a46ed3207bb] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0110023s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-jwqcb" [275a78e2-4d19-42fd-be1b-9205b78db662] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0187863s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-jl26q" [fe318d21-0845-4f6f-8474-cd5ed9acb90b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0102783s
addons_test.go:932: (dbg) Run:  kubectl --context addons-608900 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-608900 create -f testdata\vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-608900 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4f24b0c0-1647-47de-ae6e-d73bd2eb82d5] Pending
helpers_test.go:344: "test-job-nginx-0" [4f24b0c0-1647-47de-ae6e-d73bd2eb82d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4f24b0c0-1647-47de-ae6e-d73bd2eb82d5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0130345s
addons_test.go:968: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable volcano --alsologtostderr -v=1: (24.3414564s)
--- PASS: TestAddons/serial/Volcano (62.28s)
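
Note: this test, like several addon tests below, passes once every pod matching a label selector (app=volcano-scheduler, volcano.sh/job-name=test-job, and so on) reports Running. As a rough sketch of that kind of wait with client-go (not the helpers_test.go implementation; clientset construction is omitted and the package name is assumed):

	package waitutil

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls until every pod matching selector in namespace ns
	// reports the Running phase, or the timeout expires.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	}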

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-608900 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-608900 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (65.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-608900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-608900 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-608900 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64dd319b-fba9-4587-8dbd-83bc8a0ef916] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [64dd319b-fba9-4587-8dbd-83bc8a0ef916] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0150976s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.7442097s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-608900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0731 21:41:13.567152    9844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-608900 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:288: (dbg) Done: kubectl --context addons-608900 replace --force -f testdata\ingress-dns-example-v1.yaml: (1.0879328s)
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 ip: (2.444976s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.17.25.32
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable ingress-dns --alsologtostderr -v=1: (15.1019458s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable ingress --alsologtostderr -v=1: (21.5717775s)
--- PASS: TestAddons/parallel/Ingress (65.42s)
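
Note: the check above shells into the node and curls 127.0.0.1 with a spoofed Host header so the ingress routes the request to the nginx backend. The same probe expressed in Go, as a sketch that assumes it runs where the ingress controller is reachable on 127.0.0.1 (here, inside the VM):

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// Override the Host header so the ingress rule for nginx.example.com matches.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}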

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (26.1s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x55q8" [c84e2781-3586-4aa0-abe9-e6056b6fb4b9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0158831s
addons_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-608900
addons_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-608900: (21.0783577s)
--- PASS: TestAddons/parallel/InspektorGadget (26.10s)

TestAddons/parallel/MetricsServer (22.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.3998ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-95bfh" [4c2abdd2-07c6-4041-8099-cd0895f76102] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0121163s
addons_test.go:417: (dbg) Run:  kubectl --context addons-608900 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable metrics-server --alsologtostderr -v=1: (15.8299098s)
--- PASS: TestAddons/parallel/MetricsServer (22.04s)

TestAddons/parallel/HelmTiller (30.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.746ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-lsj8c" [94c0ae96-47c8-47c6-b205-6927ea9a5d0c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0170956s
addons_test.go:475: (dbg) Run:  kubectl --context addons-608900 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-608900 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.055297s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable helm-tiller --alsologtostderr -v=1: (16.2492877s)
--- PASS: TestAddons/parallel/HelmTiller (30.35s)

TestAddons/parallel/CSI (84.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.2989ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-608900 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-608900 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7da03cf3-e4fe-4dfe-9887-ca8f604776b4] Pending
helpers_test.go:344: "task-pv-pod" [7da03cf3-e4fe-4dfe-9887-ca8f604776b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7da03cf3-e4fe-4dfe-9887-ca8f604776b4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0154873s
addons_test.go:590: (dbg) Run:  kubectl --context addons-608900 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-608900 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-608900 delete pod task-pv-pod: (1.1719153s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-608900 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-608900 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-608900 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [439cfb10-7cca-44fa-b6b9-d103658e4fee] Pending
helpers_test.go:344: "task-pv-pod-restore" [439cfb10-7cca-44fa-b6b9-d103658e4fee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [439cfb10-7cca-44fa-b6b9-d103658e4fee] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0115575s
addons_test.go:632: (dbg) Run:  kubectl --context addons-608900 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-608900 delete pod task-pv-pod-restore: (1.5312734s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-608900 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-608900 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.6577686s)
addons_test.go:648: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable volumesnapshots --alsologtostderr -v=1: (15.0907403s)
--- PASS: TestAddons/parallel/CSI (84.81s)
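
Note: the repeated helpers_test.go:394 lines above are a poll on the claim's phase until it reports Bound. A minimal sketch of that wait loop follows (illustrative only; the real helper lives in helpers_test.go and its poll interval is not shown in the log).

	// Illustrative sketch of the wait loop: poll the claim's phase with kubectl
	// until it reports Bound or the timeout expires.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(5 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-608900", "default", "hpvc", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hpvc is Bound")
	}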

TestAddons/parallel/Headlamp (40.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-608900 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-608900 --alsologtostderr -v=1: (15.2715757s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-95ztd" [fc747abb-e8c9-4975-a857-d96a42fa671f] Pending
helpers_test.go:344: "headlamp-7867546754-95ztd" [fc747abb-e8c9-4975-a857-d96a42fa671f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-95ztd" [fc747abb-e8c9-4975-a857-d96a42fa671f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.015706s
addons_test.go:839: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable headlamp --alsologtostderr -v=1: (7.3890503s)
--- PASS: TestAddons/parallel/Headlamp (40.68s)

TestAddons/parallel/CloudSpanner (20.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-bk296" [bb741753-ac18-466e-ab70-0b956adcf110] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0333341s
addons_test.go:870: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-608900
addons_test.go:870: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-608900: (15.3513913s)
--- PASS: TestAddons/parallel/CloudSpanner (20.40s)

TestAddons/parallel/LocalPath (87.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-608900 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-608900 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [25154a9d-77a9-4603-836c-0a9878b07a9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [25154a9d-77a9-4603-836c-0a9878b07a9c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [25154a9d-77a9-4603-836c-0a9878b07a9c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0108947s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-608900 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 ssh "cat /opt/local-path-provisioner/pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133_default_test-pvc/file1"
addons_test.go:1009: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 ssh "cat /opt/local-path-provisioner/pvc-7bd47b74-bb56-4ede-ab4b-c10da648c133_default_test-pvc/file1": (10.8045397s)
addons_test.go:1021: (dbg) Run:  kubectl --context addons-608900 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-608900 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.8795157s)
--- PASS: TestAddons/parallel/LocalPath (87.34s)
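
Note: the path read back at addons_test.go:1009 follows the provisioner's <pv-name>_<namespace>_<pvc-name> directory layout visible in the log. The sketch below performs the same read-back but resolves the PV name instead of hard-coding it; the Go wrapper is illustrative, not the test's own code.

	// Illustrative sketch: resolve the bound PV name, then cat the file the
	// busybox pod wrote under /opt/local-path-provisioner on the node.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile, pvc := "addons-608900", "test-pvc" // names from the log above

		pv, err := exec.Command("kubectl", "--context", profile, "get", "pvc", pvc,
			"-o", "jsonpath={.spec.volumeName}").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Directory layout <pv>_<namespace>_<pvc>, as seen in the log above.
		path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_%s/file1",
			strings.TrimSpace(string(pv)), pvc)

		out, err := exec.Command("minikube", "-p", profile, "ssh", "cat "+path).CombinedOutput()
		if err != nil {
			log.Fatalf("cat %s failed: %v\n%s", path, err, out)
		}
		fmt.Printf("file1 contents: %s", out)
	}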

TestAddons/parallel/NvidiaDevicePlugin (20.3s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5z9jr" [8bb64c03-6d41-4b17-bb37-406982e7416e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0088371s
addons_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-608900
addons_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-608900: (15.2896536s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.30s)

TestAddons/parallel/Yakd (27.22s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-9wm7v" [2ff34d27-baba-4fa7-90f4-43c319526bc5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0233215s
addons_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-608900 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-windows-amd64.exe -p addons-608900 addons disable yakd --alsologtostderr -v=1: (21.1916824s)
--- PASS: TestAddons/parallel/Yakd (27.22s)

TestAddons/StoppedEnableDisable (51.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-608900
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-608900: (39.4576427s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-608900
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-608900: (4.8205037s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-608900
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-608900: (4.7012632s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-608900
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-608900: (2.7989335s)
--- PASS: TestAddons/StoppedEnableDisable (51.78s)

TestForceSystemdEnv (402.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-915300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-915300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m43.4420498s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-915300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-915300 ssh "docker info --format {{.CgroupDriver}}": (10.9407302s)
helpers_test.go:175: Cleaning up "force-systemd-env-915300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-915300
E0801 00:27:53.220155   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-915300: (48.3497646s)
--- PASS: TestForceSystemdEnv (402.73s)
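
Note: the pass criterion here is the `docker info --format {{.CgroupDriver}}` probe at docker_test.go:110 reporting systemd. A sketch of that probe only (illustrative; it assumes a `minikube` binary on PATH and does not reproduce how the test forces the systemd cgroup driver at start time):

	// Illustrative sketch of the verification step: read Docker's cgroup
	// driver from inside the VM and require it to be "systemd".
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "force-systemd-env-915300" // profile name from the log above

		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if driver := strings.TrimSpace(string(out)); driver != "systemd" {
			log.Fatalf("expected cgroup driver systemd, got %q", driver)
		}
		fmt.Println("cgroup driver is systemd")
	}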

TestErrorSpam/start (17.09s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run: (5.7639989s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run: (5.6587721s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 start --dry-run: (5.6662977s)
--- PASS: TestErrorSpam/start (17.09s)

TestErrorSpam/status (36.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status: (12.3893131s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status: (11.9496523s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 status: (11.9197297s)
--- PASS: TestErrorSpam/status (36.26s)

TestErrorSpam/pause (22.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause
E0731 21:47:53.090308   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.106321   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.122566   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.152879   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.200563   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.295445   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause: (7.7476431s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause
E0731 21:47:53.468520   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:53.799025   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:54.448798   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:55.743122   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:47:58.317149   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause: (7.2303594s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause
E0731 21:48:03.440606   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 pause: (7.3614401s)
--- PASS: TestErrorSpam/pause (22.34s)

TestErrorSpam/unpause (22.18s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause
E0731 21:48:13.686587   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause: (7.2999836s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause: (7.4502359s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 unpause: (7.4290854s)
--- PASS: TestErrorSpam/unpause (22.18s)

TestErrorSpam/stop (55.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop
E0731 21:48:34.172620   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop: (34.1929126s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop
E0731 21:49:15.141716   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop: (11.0130165s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-642600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-642600 stop: (10.4323076s)
--- PASS: TestErrorSpam/stop (55.64s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\12332\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (222.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-457100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0731 21:50:37.065001   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:52:53.103509   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 21:53:20.908525   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-457100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m42.1194098s)
--- PASS: TestFunctional/serial/StartWithProxy (222.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (124.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-457100 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-457100 --alsologtostderr -v=8: (2m4.8523864s)
functional_test.go:663: soft start took 2m4.8545369s for "functional-457100" cluster.
--- PASS: TestFunctional/serial/SoftStart (124.86s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-457100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:3.1: (8.6929885s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:3.3: (8.7653467s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cache add registry.k8s.io/pause:latest: (8.5631039s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.02s)

TestFunctional/serial/CacheCmd/cache/add_local (11.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-457100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3829952278\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-457100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3829952278\001: (2.343411s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache add minikube-local-cache-test:functional-457100
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cache add minikube-local-cache-test:functional-457100: (8.2853415s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache delete minikube-local-cache-test:functional-457100
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-457100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.13s)
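
Note: add_local builds a throwaway image on the host, pushes it into the node via `cache add`, then removes it again. A condensed sketch of that round trip; the build-context path used below is hypothetical, while the tag scheme and commands come from the log above.

	// Illustrative sketch of the add_local round trip.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		profile := "functional-457100"
		tag := "minikube-local-cache-test:" + profile

		steps := [][]string{
			{"docker", "build", "-t", tag, "./testdata/local-cache-context"}, // hypothetical context dir
			{"minikube", "-p", profile, "cache", "add", tag},
			{"minikube", "-p", profile, "cache", "delete", tag},
			{"docker", "rmi", tag},
		}
		for _, step := range steps {
			if out, err := exec.Command(step[0], step[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v failed: %v\n%s", step, err, out)
			}
		}
		log.Println("local image cached into the node and cleaned up")
	}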

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl images: (9.1534704s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.15s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.2627828s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.2733034s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0731 21:56:25.371550    7432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cache reload: (8.1934813s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.2344451s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.97s)
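
Note: the sequence above is a four-step round trip: remove the cached image inside the node, confirm crictl no longer finds it, run `cache reload`, confirm it is back. A condensed sketch (illustrative; assumes a `minikube` binary on PATH rather than this run's out/minikube-windows-amd64.exe):

	// Illustrative sketch of the cache-reload round trip exercised above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	const profile = "functional-457100" // from the log above

	func mk(args ...string) ([]byte, error) {
		return exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
	}

	func main() {
		// 1. Remove the image from the node's container runtime.
		if out, err := mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest"); err != nil {
			log.Fatalf("rmi failed: %v\n%s", err, out)
		}
		// 2. crictl should now fail to find it; a non-zero exit is the expected outcome.
		if _, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			log.Fatal("image still present after rmi")
		}
		// 3. Push everything in minikube's local cache back into the node.
		if out, err := mk("cache", "reload"); err != nil {
			log.Fatalf("cache reload failed: %v\n%s", err, out)
		}
		// 4. The image should be back.
		if out, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			log.Fatalf("image still missing after reload: %v\n%s", err, out)
		}
		fmt.Println("cache reload restored registry.k8s.io/pause:latest")
	}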

TestFunctional/serial/CacheCmd/cache/delete (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 kubectl -- --context functional-457100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/LogsCmd (229.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs
E0731 22:04:16.290690   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 22:07:53.117016   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs: (3m49.1958789s)
--- PASS: TestFunctional/serial/LogsCmd (229.20s)

TestFunctional/serial/LogsFileCmd (120.56s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2435051841\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2435051841\001\logs.txt: (2m0.5585719s)
--- PASS: TestFunctional/serial/LogsFileCmd (120.56s)

TestFunctional/parallel/AddonsCmd (0.72s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.72s)

TestFunctional/parallel/SSHCmd (19.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "echo hello": (9.689427s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "cat /etc/hostname": (9.6130684s)
--- PASS: TestFunctional/parallel/SSHCmd (19.30s)

TestFunctional/parallel/CpCmd (49.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.0157198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /home/docker/cp-test.txt": (8.8632003s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cp functional-457100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3933707961\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cp functional-457100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3933707961\001\cp-test.txt: (8.9024112s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /home/docker/cp-test.txt": (8.8429512s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.9620972s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh -n functional-457100 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.096799s)
--- PASS: TestFunctional/parallel/CpCmd (49.69s)
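
Note: each cp above is paired with an `ssh -n <node> "sudo cat ..."` read-back. A sketch of that copy-and-verify pattern (illustrative; it compares file contents directly rather than reusing the test's helpers):

	// Illustrative sketch of the copy-and-verify pattern: `minikube cp` a local
	// file into the node, read it back over ssh, and compare the two.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		profile := "functional-457100" // from the log above

		local, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		if out, err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}
		remote, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
			log.Fatal("remote copy does not match the local file")
		}
		log.Println("cp-test.txt round-tripped intact")
	}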

TestFunctional/parallel/FileSync (9.5s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12332/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/test/nested/copy/12332/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/test/nested/copy/12332/hosts": (9.5005018s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.50s)

TestFunctional/parallel/CertSync (54.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12332.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/12332.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/12332.pem": (8.7675972s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12332.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /usr/share/ca-certificates/12332.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /usr/share/ca-certificates/12332.pem": (9.0710118s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/51391683.0": (8.8728348s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/123322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/123322.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/123322.pem": (8.9824889s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/123322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /usr/share/ca-certificates/123322.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /usr/share/ca-certificates/123322.pem": (9.1348133s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.1909103s)
--- PASS: TestFunctional/parallel/CertSync (54.02s)
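
Note: CertSync only asserts that the host's extra certificate shows up at the in-VM locations listed above. A sketch of the same presence check (illustrative; the file names come from this log and are specific to this run):

	// Illustrative sketch of the presence check: every synced copy of the
	// host certificate should be readable inside the VM.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		profile := "functional-457100"
		paths := []string{ // locations taken from the log above
			"/etc/ssl/certs/12332.pem",
			"/usr/share/ca-certificates/12332.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			if out, err := exec.Command("minikube", "-p", profile, "ssh",
				"sudo cat "+p).CombinedOutput(); err != nil {
				log.Fatalf("%s is missing: %v\n%s", p, err, out)
			}
			fmt.Println("found", p)
		}
	}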

TestFunctional/parallel/NonActiveRuntimeDisabled (8.94s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-457100 ssh "sudo systemctl is-active crio": exit status 1 (8.9375431s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0731 22:11:09.650443    5256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (8.94s)
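
Note: the "Non-zero exit" above is the expected outcome, not a failure. `systemctl is-active` exits non-zero when the unit is not active (status 3 in this run's stderr), so the ssh wrapper reports an error even though "inactive" on stdout is exactly what the test wants. A sketch of that interpretation (illustrative):

	// Illustrative sketch: a non-zero exit from `systemctl is-active` with
	// "inactive" on stdout is the desired state for the unused runtime.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "functional-457100" // from the log above

		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active crio").Output()
		state := strings.TrimSpace(string(out))
		if state == "" {
			log.Fatalf("could not query crio state: %v", err)
		}
		if err == nil && state == "active" {
			log.Fatal("crio is active, but only the docker runtime should be running")
		}
		fmt.Printf("crio state: %q (non-active, as expected)\n", state)
	}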

TestFunctional/parallel/License (2.78s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (2.7678445s)
--- PASS: TestFunctional/parallel/License (2.78s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.95s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.4112563s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.95s)

TestFunctional/parallel/ProfileCmd/profile_list (11.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.8597034s)
functional_test.go:1315: Took "10.8599168s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "233.1861ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (11.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (10.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.1479065s)
functional_test.go:1366: Took "10.1480219s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "234.4201ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.38s)
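A hedged sketch of what the timings above measure: the same profile list -o json invocation, decoded generically so no assumptions are made about the exact JSON schema. Only the binary path is taken from this log.

    // profile_list_json.go - illustrative sketch, not the harness code.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-o", "json").Output()
    	if err != nil {
    		log.Fatalf("profile list failed: %v", err)
    	}
    	var parsed map[string]interface{}
    	if err := json.Unmarshal(out, &parsed); err != nil {
    		log.Fatalf("output was not valid JSON: %v", err)
    	}
    	// The test only cares that the command succeeds and emits JSON;
    	// printing the top-level keys is enough to eyeball the result.
    	for k := range parsed {
    		fmt.Println("top-level key:", k)
    	}
    }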

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-457100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9996: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 10548: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/Version/short (0.24s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

TestFunctional/parallel/Version/components (7.68s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 version -o=json --components: (7.6778142s)
--- PASS: TestFunctional/parallel/Version/components (7.68s)

TestFunctional/parallel/ImageCommands/Setup (2.31s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.1076046s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-457100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.46s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2: (2.4553397s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.46s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.47s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2: (2.4647487s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.47s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.43s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 update-context --alsologtostderr -v=2: (2.4228851s)
E0731 22:22:53.116151   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (120.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image rm kicbase/echo-server:functional-457100 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image rm kicbase/echo-server:functional-457100 --alsologtostderr: (1m0.1590099s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image ls
E0731 22:20:56.309722   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image ls: (1m0.2322832s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (120.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-457100
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-457100 image save --daemon kicbase/echo-server:functional-457100 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-457100 image save --daemon kicbase/echo-server:functional-457100 --alsologtostderr: (2m0.0704008s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-457100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (120.47s)
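A minimal sketch of the round-trip shown above: remove the tag from the local Docker daemon, ask minikube to save the image back into the daemon, then confirm with docker image inspect that it is visible again. The image tag and profile name are copied from this log; the sketch is illustrative, not the test's implementation.

    // image_save_daemon.go - illustrative sketch of the image round-trip above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and reports only whether it succeeded.
    func run(name string, args ...string) error {
    	return exec.Command(name, args...).Run()
    }

    func main() {
    	_ = run("docker", "rmi", "kicbase/echo-server:functional-457100") // fine if already absent
    	if err := run("out/minikube-windows-amd64.exe", "-p", "functional-457100",
    		"image", "save", "--daemon", "kicbase/echo-server:functional-457100"); err != nil {
    		fmt.Println("image save --daemon failed:", err)
    		return
    	}
    	err := run("docker", "image", "inspect", "kicbase/echo-server:functional-457100")
    	fmt.Println("image present in daemon:", err == nil)
    }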

                                                
                                    
TestFunctional/delete_echo-server_images (0.37s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-457100
--- PASS: TestFunctional/delete_echo-server_images (0.37s)

TestFunctional/delete_my-image_image (0.17s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-457100
--- PASS: TestFunctional/delete_my-image_image (0.17s)

TestFunctional/delete_minikube_cached_images (0.17s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-457100
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

TestMultiControlPlane/serial/StartCluster (716.69s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-207300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0731 22:30:13.354971   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.369757   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.385224   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.416410   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.463607   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.559444   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:13.733785   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:14.066554   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:14.717170   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:15.998795   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:18.566387   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:23.687434   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:33.933480   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:30:54.429991   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:31:35.403045   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:32:53.130194   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 22:32:57.336913   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:35:13.361380   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:35:41.185540   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 22:37:36.334963   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 22:37:53.135057   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-207300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m20.1435068s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr
E0731 22:40:13.372963   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr: (36.5460685s)
--- PASS: TestMultiControlPlane/serial/StartCluster (716.69s)

TestMultiControlPlane/serial/DeployApp (13.52s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- rollout status deployment/busybox: (4.2472619s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- nslookup kubernetes.io: (2.0639293s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- nslookup kubernetes.io: (1.6096428s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-dmsjq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-f8sql -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-207300 -- exec busybox-fc5497c4f-x7dnz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.52s)
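The nslookup calls above are cluster-DNS probes run from inside each busybox pod. Below is a hedged standalone sketch of the same probe; the context name comes from this log, while the label selector and the choice of lookup target are assumptions for illustration only.

    // ha_dns_check.go - illustrative sketch of the DNS probe shown above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the busybox pods created from ha-pod-dns-test.yaml (selector is assumed).
    	out, err := exec.Command("kubectl", "--context", "ha-207300", "get", "pods",
    		"-l", "app=busybox", "-o", "jsonpath={.items[*].metadata.name}").Output()
    	if err != nil {
    		log.Fatalf("listing pods failed: %v", err)
    	}
    	for _, pod := range strings.Fields(string(out)) {
    		// Resolving kubernetes.default from inside the pod exercises cluster DNS.
    		res, err := exec.Command("kubectl", "--context", "ha-207300", "exec", pod,
    			"--", "nslookup", "kubernetes.default").CombinedOutput()
    		fmt.Printf("%s: err=%v\n%s\n", pod, err, res)
    	}
    }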

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (264.58s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-207300 -v=7 --alsologtostderr
E0731 22:42:53.145274   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 22:45:13.367430   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-207300 -v=7 --alsologtostderr: (3m36.2825437s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr: (48.3000724s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (264.58s)

TestMultiControlPlane/serial/NodeLabels (0.19s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-207300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (28.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (28.9484231s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (28.95s)

TestMultiControlPlane/serial/CopyFile (623.47s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 status --output json -v=7 --alsologtostderr
E0731 22:46:36.566717   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 status --output json -v=7 --alsologtostderr: (48.2292685s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300:/home/docker/cp-test.txt: (9.3659124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt": (9.349909s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300.txt: (9.2461231s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt"
E0731 22:47:53.139729   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt": (9.3780726s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300_ha-207300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300_ha-207300-m02.txt: (16.5563368s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt": (9.342634s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m02.txt": (9.2953827s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300_ha-207300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300_ha-207300-m03.txt: (16.1805513s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt": (9.2522138s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m03.txt": (9.2924511s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300_ha-207300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300_ha-207300-m04.txt: (16.1885304s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test.txt": (9.2886444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300_ha-207300-m04.txt": (9.2784933s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m02:/home/docker/cp-test.txt: (9.2801256s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt": (9.3237863s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m02.txt: (9.3103251s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt"
E0731 22:50:13.380580   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt": (9.3736937s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m02_ha-207300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m02_ha-207300.txt: (16.1078004s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt": (9.2136588s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300.txt": (9.2306871s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300-m02_ha-207300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300-m02_ha-207300-m03.txt: (15.9943156s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt": (9.2663904s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300-m03.txt": (9.3272475s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300-m02_ha-207300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m02:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300-m02_ha-207300-m04.txt: (16.1949915s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test.txt": (9.4054426s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300-m02_ha-207300-m04.txt": (9.2947442s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m03:/home/docker/cp-test.txt: (9.6276696s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt": (9.5232373s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m03.txt: (9.5166089s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt": (9.4982456s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m03_ha-207300.txt
E0731 22:52:53.151676   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m03_ha-207300.txt: (16.6410661s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt": (9.6033659s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300.txt": (9.7309318s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt: (16.7364849s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt": (9.6450791s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300-m02.txt": (9.6578854s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m03:/home/docker/cp-test.txt ha-207300-m04:/home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt: (16.6609719s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt"
E0731 22:54:16.362448   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test.txt": (9.5862765s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test_ha-207300-m03_ha-207300-m04.txt": (9.5224541s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp testdata\cp-test.txt ha-207300-m04:/home/docker/cp-test.txt: (9.4897609s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt": (9.5098779s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4195641153\001\cp-test_ha-207300-m04.txt: (9.609589s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt": (9.7274513s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m04_ha-207300.txt
E0731 22:55:13.382414   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300:/home/docker/cp-test_ha-207300-m04_ha-207300.txt: (16.8217067s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt": (9.5831859s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300.txt": (9.4491855s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300-m02:/home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt: (16.6094413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt": (9.4725949s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m02 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300-m02.txt": (9.6028108s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 cp ha-207300-m04:/home/docker/cp-test.txt ha-207300-m03:/home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt: (16.7831069s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m04 "sudo cat /home/docker/cp-test.txt": (9.4618102s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 ssh -n ha-207300-m03 "sudo cat /home/docker/cp-test_ha-207300-m04_ha-207300-m03.txt": (9.6563778s)
--- PASS: TestMultiControlPlane/serial/CopyFile (623.47s)
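Every pair of commands above follows the same copy-then-verify pattern: minikube cp pushes a file to a node, and minikube ssh -n reads it back. A hedged standalone sketch of one such round trip is below; the profile and node names are taken from the log, the local path mirrors the testdata file used above, and the comparison logic is an assumption rather than the harness's actual check.

    // cp_roundtrip.go - illustrative sketch of the copy-and-verify loop above.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const minikube = "out/minikube-windows-amd64.exe"
    	want, err := os.ReadFile(`testdata\cp-test.txt`)
    	if err != nil {
    		log.Fatalf("reading local test file: %v", err)
    	}
    	// Copy the file onto the m02 node...
    	if err := exec.Command(minikube, "-p", "ha-207300", "cp",
    		`testdata\cp-test.txt`, "ha-207300-m02:/home/docker/cp-test.txt").Run(); err != nil {
    		log.Fatalf("cp failed: %v", err)
    	}
    	// ...then read it back over ssh and compare.
    	got, err := exec.Command(minikube, "-p", "ha-207300", "ssh", "-n", "ha-207300-m02",
    		"sudo cat /home/docker/cp-test.txt").Output()
    	if err != nil {
    		log.Fatalf("ssh cat failed: %v", err)
    	}
    	fmt.Println("round-trip matched:", string(got) == string(want))
    }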

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (76.35s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-207300 node stop m02 -v=7 --alsologtostderr: (35.036104s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr
E0731 22:57:53.149986   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-207300 status -v=7 --alsologtostderr: exit status 7 (41.3078252s)

-- stdout --
	ha-207300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-207300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-207300-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-207300-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0731 22:57:32.041915    2976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 22:57:32.134177    2976 out.go:291] Setting OutFile to fd 1328 ...
	I0731 22:57:32.135191    2976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:57:32.135191    2976 out.go:304] Setting ErrFile to fd 1004...
	I0731 22:57:32.135191    2976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:57:32.149179    2976 out.go:298] Setting JSON to false
	I0731 22:57:32.149179    2976 mustload.go:65] Loading cluster: ha-207300
	I0731 22:57:32.149179    2976 notify.go:220] Checking for updates...
	I0731 22:57:32.150192    2976 config.go:182] Loaded profile config "ha-207300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:57:32.150192    2976 status.go:255] checking status of ha-207300 ...
	I0731 22:57:32.151178    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:57:34.591932    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:34.591932    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:34.592010    2976 status.go:330] ha-207300 host status = "Running" (err=<nil>)
	I0731 22:57:34.592010    2976 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:57:34.592893    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:57:36.927066    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:36.927066    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:36.927066    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:57:39.734829    2976 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:57:39.735185    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:39.735185    2976 host.go:66] Checking if "ha-207300" exists ...
	I0731 22:57:39.748056    2976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:57:39.748056    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300 ).state
	I0731 22:57:42.111713    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:42.111910    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:42.112046    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300 ).networkadapters[0]).ipaddresses[0]
	I0731 22:57:44.904135    2976 main.go:141] libmachine: [stdout =====>] : 172.17.21.92
	
	I0731 22:57:44.904135    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:44.905193    2976 sshutil.go:53] new ssh client: &{IP:172.17.21.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300\id_rsa Username:docker}
	I0731 22:57:45.004553    2976 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2564304s)
	I0731 22:57:45.020117    2976 ssh_runner.go:195] Run: systemctl --version
	I0731 22:57:45.046272    2976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:57:45.079465    2976 kubeconfig.go:125] found "ha-207300" server: "https://172.17.31.254:8443"
	I0731 22:57:45.079560    2976 api_server.go:166] Checking apiserver status ...
	I0731 22:57:45.091545    2976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:57:45.141362    2976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2290/cgroup
	W0731 22:57:45.166085    2976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2290/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:57:45.179359    2976 ssh_runner.go:195] Run: ls
	I0731 22:57:45.187608    2976 api_server.go:253] Checking apiserver healthz at https://172.17.31.254:8443/healthz ...
	I0731 22:57:45.193903    2976 api_server.go:279] https://172.17.31.254:8443/healthz returned 200:
	ok
	I0731 22:57:45.194949    2976 status.go:422] ha-207300 apiserver status = Running (err=<nil>)
	I0731 22:57:45.194949    2976 status.go:257] ha-207300 status: &{Name:ha-207300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:57:45.195004    2976 status.go:255] checking status of ha-207300-m02 ...
	I0731 22:57:45.195797    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m02 ).state
	I0731 22:57:47.458087    2976 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 22:57:47.458087    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:47.458390    2976 status.go:330] ha-207300-m02 host status = "Stopped" (err=<nil>)
	I0731 22:57:47.458390    2976 status.go:343] host is not running, skipping remaining checks
	I0731 22:57:47.458390    2976 status.go:257] ha-207300-m02 status: &{Name:ha-207300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:57:47.458501    2976 status.go:255] checking status of ha-207300-m03 ...
	I0731 22:57:47.459168    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:57:49.763009    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:49.763570    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:49.763631    2976 status.go:330] ha-207300-m03 host status = "Running" (err=<nil>)
	I0731 22:57:49.763631    2976 host.go:66] Checking if "ha-207300-m03" exists ...
	I0731 22:57:49.764314    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:57:52.142749    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:52.143574    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:52.143682    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:57:54.865563    2976 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:57:54.865563    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:54.865563    2976 host.go:66] Checking if "ha-207300-m03" exists ...
	I0731 22:57:54.878477    2976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:57:54.878477    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m03 ).state
	I0731 22:57:57.245521    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:57:57.245521    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:57:57.245521    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m03 ).networkadapters[0]).ipaddresses[0]
	I0731 22:57:59.998641    2976 main.go:141] libmachine: [stdout =====>] : 172.17.27.253
	
	I0731 22:57:59.998641    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:00.000068    2976 sshutil.go:53] new ssh client: &{IP:172.17.27.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m03\id_rsa Username:docker}
	I0731 22:58:00.097061    2976 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2185174s)
	I0731 22:58:00.110707    2976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:58:00.141302    2976 kubeconfig.go:125] found "ha-207300" server: "https://172.17.31.254:8443"
	I0731 22:58:00.141347    2976 api_server.go:166] Checking apiserver status ...
	I0731 22:58:00.154204    2976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:58:00.194486    2976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup
	W0731 22:58:00.218059    2976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:58:00.231904    2976 ssh_runner.go:195] Run: ls
	I0731 22:58:00.239861    2976 api_server.go:253] Checking apiserver healthz at https://172.17.31.254:8443/healthz ...
	I0731 22:58:00.252894    2976 api_server.go:279] https://172.17.31.254:8443/healthz returned 200:
	ok
	I0731 22:58:00.252894    2976 status.go:422] ha-207300-m03 apiserver status = Running (err=<nil>)
	I0731 22:58:00.252894    2976 status.go:257] ha-207300-m03 status: &{Name:ha-207300-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:58:00.252894    2976 status.go:255] checking status of ha-207300-m04 ...
	I0731 22:58:00.254266    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m04 ).state
	I0731 22:58:02.565843    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:58:02.565843    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:02.565943    2976 status.go:330] ha-207300-m04 host status = "Running" (err=<nil>)
	I0731 22:58:02.566034    2976 host.go:66] Checking if "ha-207300-m04" exists ...
	I0731 22:58:02.567057    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m04 ).state
	I0731 22:58:04.955480    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:58:04.955480    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:04.955623    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m04 ).networkadapters[0]).ipaddresses[0]
	I0731 22:58:07.804844    2976 main.go:141] libmachine: [stdout =====>] : 172.17.23.92
	
	I0731 22:58:07.804844    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:07.804844    2976 host.go:66] Checking if "ha-207300-m04" exists ...
	I0731 22:58:07.818984    2976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:58:07.818984    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-207300-m04 ).state
	I0731 22:58:10.232810    2976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 22:58:10.232810    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:10.232810    2976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-207300-m04 ).networkadapters[0]).ipaddresses[0]
	I0731 22:58:13.024108    2976 main.go:141] libmachine: [stdout =====>] : 172.17.23.92
	
	I0731 22:58:13.024108    2976 main.go:141] libmachine: [stderr =====>] : 
	I0731 22:58:13.024108    2976 sshutil.go:53] new ssh client: &{IP:172.17.23.92 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-207300-m04\id_rsa Username:docker}
	I0731 22:58:13.131654    2976 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.3126019s)
	I0731 22:58:13.147395    2976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:58:13.176134    2976 status.go:257] ha-207300-m04 status: &{Name:ha-207300-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (76.35s)
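Note that the non-zero exit from minikube status (exit status 7 in this run) is expected while m02 is stopped; the test still wants the per-node report printed above. A hedged sketch of capturing both the output and the exit code instead of failing on the error; the binary path and profile name are from this log.

    // degraded_status.go - illustrative sketch of the status check above.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "ha-207300",
    		"status", "-v=7", "--alsologtostderr")
    	out, err := cmd.Output() // stdout only; verbose logging goes to stderr
    	code := 0
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// A stopped node makes status exit non-zero; keep the code and the report.
    		code = exitErr.ExitCode()
    	} else if err != nil {
    		fmt.Println("could not run status:", err)
    		return
    	}
    	fmt.Printf("exit code: %d\n%s", code, out)
    }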

                                                
                                    
TestJSONOutput/start/Command (244.47s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-629100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0731 23:07:53.158562   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:10:13.398053   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-629100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m4.4702051s)
--- PASS: TestJSONOutput/start/Command (244.47s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.03s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-629100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-629100 --output=json --user=testUser: (8.0266281s)
--- PASS: TestJSONOutput/pause/Command (8.03s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.9s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-629100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-629100 --output=json --user=testUser: (7.8950142s)
--- PASS: TestJSONOutput/unpause/Command (7.90s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (40.23s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-629100 --output=json --user=testUser
E0731 23:10:56.390253   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-629100 --output=json --user=testUser: (40.2291247s)
--- PASS: TestJSONOutput/stop/Command (40.23s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.45s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-308500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-308500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (273.3542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fa42f7f9-b60e-4730-b536-2d1454f4c8bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-308500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"762ca078-e215-41f9-b6c0-d6e4105ac172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"95d9ec53-c612-48ec-9cf8-aeab64966e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ac098106-b44b-4e05-ba54-60902bbcfdaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b2c065e6-2c33-42ae-90bf-73b5177a7bdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"c67bf70e-dbc6-4d4d-abcb-b7a78755966e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cc4a2ac1-7915-4767-b6ee-714cb0f77d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:11:43.271756    8788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-308500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-308500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-308500: (1.1741666s)
--- PASS: TestErrorJSONOutput (1.45s)
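The failure case emits the same one-object-per-line stream, ending in an io.k8s.sigs.minikube.error event that carries the exit code and message shown in the stdout above. A sketch of picking that event out of a fresh run, reusing the profile name and flags from this test:

	# Collect all events, then keep only error events and print their code and message.
	$events = & out/minikube-windows-amd64.exe start -p json-output-error-308500 --memory=2200 --output=json --wait=true --driver=fail 2>$null |
	  ForEach-Object { $_ | ConvertFrom-Json }
	$events | Where-Object { $_.type -eq 'io.k8s.sigs.minikube.error' } |
	  ForEach-Object { '{0}: {1}' -f $_.data.exitcode, $_.data.message }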

                                                
                                    
TestMainNoArgs (0.25s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

                                                
                                    
TestMinikubeProfile (519.39s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-989500 --driver=hyperv
E0731 23:12:53.167290   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-989500 --driver=hyperv: (3m18.494837s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-989500 --driver=hyperv
E0731 23:15:13.399492   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 23:17:53.164412   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-989500 --driver=hyperv: (3m21.4838153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-989500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.8073023s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-989500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.9980041s)
helpers_test.go:175: Cleaning up "second-989500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-989500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-989500: (40.4886441s)
helpers_test.go:175: Cleaning up "first-989500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-989500
E0731 23:19:56.598447   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 23:20:13.404140   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-989500: (39.2100514s)
--- PASS: TestMinikubeProfile (519.39s)
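The profile checks above rely on profile list -ojson. A rough sketch for pulling the profile names out of that output; the valid array and Name field are assumptions about the JSON shape, since the test does not print it:

	# Read the profile list as JSON and print each valid profile's name.
	$json = & out/minikube-windows-amd64.exe profile list -ojson | Out-String
	$profiles = $json | ConvertFrom-Json
	$profiles.valid | ForEach-Object { $_.Name }   # assumes a "valid" array with "Name" entries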

                                                
                                    
TestMountStart/serial/StartWithMountFirst (149.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-526800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-526800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m28.0104414s)
E0731 23:22:53.168791   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
--- PASS: TestMountStart/serial/StartWithMountFirst (149.02s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-526800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-526800 ssh -- ls /minikube-host: (9.2505863s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.25s)
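The two steps above, starting a profile with a host mount and then listing the mount point over SSH, can be replayed by hand. A sketch using the same flags as the test, assuming a working Hyper-V setup:

	# Start a no-Kubernetes profile with the host directory mounted at /minikube-host.
	& out/minikube-windows-amd64.exe start -p mount-start-1-526800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
	# Verify the mount is visible inside the guest.
	& out/minikube-windows-amd64.exe -p mount-start-1-526800 ssh -- ls /minikube-host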

                                                
                                    
TestMountStart/serial/StartWithMountSecond (150.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-526800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0731 23:25:13.399553   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-526800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.4902099s)
--- PASS: TestMountStart/serial/StartWithMountSecond (150.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host: (9.2696988s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (29.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-526800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-526800 --alsologtostderr -v=5: (29.6627884s)
--- PASS: TestMountStart/serial/DeleteFirst (29.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.19s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host: (9.1916053s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.19s)

                                                
                                    
TestMountStart/serial/Stop (29.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-526800
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-526800: (29.3490436s)
--- PASS: TestMountStart/serial/Stop (29.35s)

                                                
                                    
TestMountStart/serial/RestartStopped (113.84s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-526800
E0731 23:27:36.412976   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:27:53.176328   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-526800: (1m52.8271221s)
--- PASS: TestMountStart/serial/RestartStopped (113.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.2s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-526800 ssh -- ls /minikube-host: (9.2034125s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.20s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (433.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-411400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0731 23:30:13.410720   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
E0731 23:32:53.181226   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:35:13.408224   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-411400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m50.1957177s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr
E0731 23:36:36.614387   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr: (23.6124738s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (433.81s)
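The two-node bring-up boils down to one start call followed by a status call; a condensed sketch of the commands exercised above, not a replacement for the test's own assertions:

	# Create a two-node cluster, then report host/kubelet/apiserver state per node.
	& out/minikube-windows-amd64.exe start -p multinode-411400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
	& out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr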

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- rollout status deployment/busybox: (3.3944521s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- nslookup kubernetes.io: (2.1770138s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-4hgmz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec busybox-fc5497c4f-lxslb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.50s)
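The deployment step applies a busybox manifest, waits for the rollout, and then resolves an external and an in-cluster name from each pod. A sketch of that loop, assuming the pod names are read back from the get pods output as the test does rather than hard-coded:

	# Deploy the test workload and wait for the rollout to finish.
	& out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	& out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- rollout status deployment/busybox
	# Resolve names from every busybox pod.
	$pods = (& out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- get pods -o jsonpath='{.items[*].metadata.name}') -split '\s+'
	foreach ($pod in $pods) {
	  & out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec $pod -- nslookup kubernetes.io
	  & out/minikube-windows-amd64.exe kubectl -p multinode-411400 -- exec $pod -- nslookup kubernetes.default.svc.cluster.local
	}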

                                                
                                    
TestMultiNode/serial/AddNode (233.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-411400 -v 3 --alsologtostderr
E0731 23:37:53.175642   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0731 23:40:13.408889   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-411400 -v 3 --alsologtostderr: (3m18.156933s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr: (35.1468111s)
--- PASS: TestMultiNode/serial/AddNode (233.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-411400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (9.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.473942s)
--- PASS: TestMultiNode/serial/ProfileList (9.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (355.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 status --output json --alsologtostderr: (35.0589593s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400:/home/docker/cp-test.txt: (9.2431176s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt": (9.1866279s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400.txt: (9.1839444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt"
E0731 23:42:53.176149   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt": (9.2347022s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt multinode-411400-m02:/home/docker/cp-test_multinode-411400_multinode-411400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt multinode-411400-m02:/home/docker/cp-test_multinode-411400_multinode-411400-m02.txt: (16.0904026s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt": (9.3513845s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test_multinode-411400_multinode-411400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test_multinode-411400_multinode-411400-m02.txt": (9.3131495s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt multinode-411400-m03:/home/docker/cp-test_multinode-411400_multinode-411400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt multinode-411400-m03:/home/docker/cp-test_multinode-411400_multinode-411400-m03.txt: (16.0734765s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt": (9.1810994s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test_multinode-411400_multinode-411400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test_multinode-411400_multinode-411400-m03.txt": (9.1276305s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400-m02:/home/docker/cp-test.txt
E0731 23:44:16.428717   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400-m02:/home/docker/cp-test.txt: (9.2567109s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt": (9.1909726s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m02.txt: (9.1559595s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt": (9.2694257s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt multinode-411400:/home/docker/cp-test_multinode-411400-m02_multinode-411400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt multinode-411400:/home/docker/cp-test_multinode-411400-m02_multinode-411400.txt: (16.2757825s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt": (9.424593s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test_multinode-411400-m02_multinode-411400.txt"
E0731 23:45:13.423397   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test_multinode-411400-m02_multinode-411400.txt": (9.3516901s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt multinode-411400-m03:/home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m02:/home/docker/cp-test.txt multinode-411400-m03:/home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt: (16.6860378s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test.txt": (9.4373692s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test_multinode-411400-m02_multinode-411400-m03.txt": (9.2410836s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400-m03:/home/docker/cp-test.txt: (9.0874959s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt": (9.3248225s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1438759977\001\cp-test_multinode-411400-m03.txt: (9.2325179s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt": (9.4390543s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt multinode-411400:/home/docker/cp-test_multinode-411400-m03_multinode-411400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt multinode-411400:/home/docker/cp-test_multinode-411400-m03_multinode-411400.txt: (16.3738731s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt": (9.3458645s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test_multinode-411400-m03_multinode-411400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test_multinode-411400-m03_multinode-411400.txt": (9.4763508s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt multinode-411400-m02:/home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400-m03:/home/docker/cp-test.txt multinode-411400-m02:/home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt: (16.2650018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m03 "sudo cat /home/docker/cp-test.txt": (9.3832483s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test_multinode-411400-m03_multinode-411400-m02.txt": (9.277057s)
--- PASS: TestMultiNode/serial/CopyFile (355.56s)
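Every copy above is verified by cat-ing the file back over SSH on the destination node. The basic round trip, in sketch form, for one host-to-node and one node-to-node copy with the same profile and paths:

	# Host -> node, then read it back on the node.
	& out/minikube-windows-amd64.exe -p multinode-411400 cp testdata\cp-test.txt multinode-411400:/home/docker/cp-test.txt
	& out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400 "sudo cat /home/docker/cp-test.txt"
	# Node -> node, then read it back on the second node.
	& out/minikube-windows-amd64.exe -p multinode-411400 cp multinode-411400:/home/docker/cp-test.txt multinode-411400-m02:/home/docker/cp-test_multinode-411400_multinode-411400-m02.txt
	& out/minikube-windows-amd64.exe -p multinode-411400 ssh -n multinode-411400-m02 "sudo cat /home/docker/cp-test_multinode-411400_multinode-411400-m02.txt"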

                                                
                                    
TestMultiNode/serial/StopNode (75.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 node stop m03
E0731 23:47:53.187117   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 node stop m03: (24.3269286s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-411400 status: exit status 7 (25.6538186s)

                                                
                                                
-- stdout --
	multinode-411400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-411400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-411400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:48:07.832801   12596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-411400 status --alsologtostderr: exit status 7 (25.5147932s)

                                                
                                                
-- stdout --
	multinode-411400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-411400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-411400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 23:48:33.498757    4064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 23:48:33.587289    4064 out.go:291] Setting OutFile to fd 1576 ...
	I0731 23:48:33.588307    4064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:48:33.588307    4064 out.go:304] Setting ErrFile to fd 1332...
	I0731 23:48:33.588307    4064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:48:33.607227    4064 out.go:298] Setting JSON to false
	I0731 23:48:33.607371    4064 mustload.go:65] Loading cluster: multinode-411400
	I0731 23:48:33.607437    4064 notify.go:220] Checking for updates...
	I0731 23:48:33.608541    4064 config.go:182] Loaded profile config "multinode-411400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 23:48:33.608652    4064 status.go:255] checking status of multinode-411400 ...
	I0731 23:48:33.609555    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:48:35.768685    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:35.769719    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:35.769719    4064 status.go:330] multinode-411400 host status = "Running" (err=<nil>)
	I0731 23:48:35.769811    4064 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:48:35.770409    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:48:37.926290    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:37.926290    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:37.926290    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:48:40.466756    4064 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:48:40.466756    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:40.466878    4064 host.go:66] Checking if "multinode-411400" exists ...
	I0731 23:48:40.480046    4064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:48:40.480046    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400 ).state
	I0731 23:48:42.588077    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:42.588739    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:42.588739    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400 ).networkadapters[0]).ipaddresses[0]
	I0731 23:48:45.073068    4064 main.go:141] libmachine: [stdout =====>] : 172.17.20.56
	
	I0731 23:48:45.073178    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:45.073581    4064 sshutil.go:53] new ssh client: &{IP:172.17.20.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400\id_rsa Username:docker}
	I0731 23:48:45.179877    4064 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6997711s)
	I0731 23:48:45.193190    4064 ssh_runner.go:195] Run: systemctl --version
	I0731 23:48:45.214838    4064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:48:45.241813    4064 kubeconfig.go:125] found "multinode-411400" server: "https://172.17.20.56:8443"
	I0731 23:48:45.241897    4064 api_server.go:166] Checking apiserver status ...
	I0731 23:48:45.252509    4064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:48:45.292056    4064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2165/cgroup
	W0731 23:48:45.308623    4064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:48:45.318612    4064 ssh_runner.go:195] Run: ls
	I0731 23:48:45.326218    4064 api_server.go:253] Checking apiserver healthz at https://172.17.20.56:8443/healthz ...
	I0731 23:48:45.334697    4064 api_server.go:279] https://172.17.20.56:8443/healthz returned 200:
	ok
	I0731 23:48:45.335198    4064 status.go:422] multinode-411400 apiserver status = Running (err=<nil>)
	I0731 23:48:45.335198    4064 status.go:257] multinode-411400 status: &{Name:multinode-411400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:48:45.335198    4064 status.go:255] checking status of multinode-411400-m02 ...
	I0731 23:48:45.336105    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:48:47.470853    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:47.470853    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:47.471353    4064 status.go:330] multinode-411400-m02 host status = "Running" (err=<nil>)
	I0731 23:48:47.471353    4064 host.go:66] Checking if "multinode-411400-m02" exists ...
	I0731 23:48:47.472256    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:48:49.587274    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:49.587274    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:49.587660    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:48:52.120807    4064 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:48:52.120807    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:52.120807    4064 host.go:66] Checking if "multinode-411400-m02" exists ...
	I0731 23:48:52.132832    4064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:48:52.133010    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m02 ).state
	I0731 23:48:54.218045    4064 main.go:141] libmachine: [stdout =====>] : Running
	
	I0731 23:48:54.218458    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:54.218574    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-411400-m02 ).networkadapters[0]).ipaddresses[0]
	I0731 23:48:56.682533    4064 main.go:141] libmachine: [stdout =====>] : 172.17.28.42
	
	I0731 23:48:56.682533    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:56.683387    4064 sshutil.go:53] new ssh client: &{IP:172.17.28.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-411400-m02\id_rsa Username:docker}
	I0731 23:48:56.772504    4064 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6396127s)
	I0731 23:48:56.785581    4064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:48:56.807186    4064 status.go:257] multinode-411400-m02 status: &{Name:multinode-411400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:48:56.807186    4064 status.go:255] checking status of multinode-411400-m03 ...
	I0731 23:48:56.808060    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-411400-m03 ).state
	I0731 23:48:58.869367    4064 main.go:141] libmachine: [stdout =====>] : Off
	
	I0731 23:48:58.869367    4064 main.go:141] libmachine: [stderr =====>] : 
	I0731 23:48:58.869906    4064 status.go:330] multinode-411400-m03 host status = "Stopped" (err=<nil>)
	I0731 23:48:58.869906    4064 status.go:343] host is not running, skipping remaining checks
	I0731 23:48:58.869906    4064 status.go:257] multinode-411400-m03 status: &{Name:multinode-411400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (75.50s)
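With one worker stopped, status intentionally exits with code 7 so callers can tell the cluster is degraded, which is why the test treats the non-zero exit as expected. A sketch of checking for that from PowerShell, using the same profile and node:

	# Stop the third node, then check status; a non-zero exit code signals degraded state.
	& out/minikube-windows-amd64.exe -p multinode-411400 node stop m03
	& out/minikube-windows-amd64.exe -p multinode-411400 status
	if ($LASTEXITCODE -eq 7) {
	  Write-Output "at least one node is stopped (exit code $LASTEXITCODE)"
	}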

                                                
                                    
TestMultiNode/serial/StartAfterStop (192.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 node start m03 -v=7 --alsologtostderr
E0731 23:50:13.414595   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 node start m03 -v=7 --alsologtostderr: (2m36.8519027s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-411400 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-411400 status -v=7 --alsologtostderr: (35.2296997s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (192.25s)

                                                
                                    
TestPreload (493.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0801 00:02:53.202534   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
E0801 00:05:13.426449   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m58.7536171s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-838400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-838400 image pull gcr.io/k8s-minikube/busybox: (8.2975984s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-838400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-838400: (38.6052576s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0801 00:07:53.204044   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m35.1099996s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-838400 image list
E0801 00:09:56.651381   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-838400 image list: (7.7370039s)
helpers_test.go:175: Cleaning up "test-preload-838400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-838400
E0801 00:10:13.442985   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-838400: (44.9270645s)
--- PASS: TestPreload (493.43s)
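The preload scenario starts on an older Kubernetes version with the preloaded image tarball disabled, pulls an extra image, stops, restarts without those flags, and finally checks that the image is still listed. A sketch of the same sequence with the flags used above:

	# Start on v1.24.4 with preload disabled, then add an image to the node.
	& out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
	& out/minikube-windows-amd64.exe -p test-preload-838400 image pull gcr.io/k8s-minikube/busybox
	# Stop, restart without the preload/version flags, and confirm the image survived.
	& out/minikube-windows-amd64.exe stop -p test-preload-838400
	& out/minikube-windows-amd64.exe start -p test-preload-838400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
	& out/minikube-windows-amd64.exe -p test-preload-838400 image list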

                                                
                                    
TestScheduledStopWindows (340.61s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-585100 --memory=2048 --driver=hyperv
E0801 00:12:53.201110   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-608900\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-585100 --memory=2048 --driver=hyperv: (3m25.139037s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-585100 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-585100 --schedule 5m: (11.2518112s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-585100 -n scheduled-stop-585100
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-585100 -n scheduled-stop-585100: exit status 1 (10.02598s)

                                                
                                                
** stderr ** 
	W0801 00:14:19.284916    8196 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-585100 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-585100 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9298274s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-585100 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-585100 --schedule 5s: (11.1657406s)
E0801 00:15:13.443584   12332 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-457100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-585100
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-585100: exit status 7 (2.538996s)

                                                
                                                
-- stdout --
	scheduled-stop-585100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0801 00:15:50.440091    3888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-585100 -n scheduled-stop-585100
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-585100 -n scheduled-stop-585100: exit status 7 (2.4820839s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0801 00:15:52.979214   10036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-585100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-585100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-585100: (28.0534091s)
--- PASS: TestScheduledStopWindows (340.61s)
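For reference, the scheduled-stop sequence above reduces to the following commands; a minimal sketch with a hypothetical profile name, minikube standing in for out/minikube-windows-amd64.exe:

	# Schedule a stop five minutes out (the run above also exercises --schedule 5s).
	minikube stop -p sched-demo --schedule 5m
	# Check the remaining time and the host state; once the host is stopped, status exits with status 7.
	minikube status -p sched-demo --format={{.TimeToStop}}
	minikube status -p sched-demo --format={{.Host}}
	# The pending stop is visible inside the guest as a systemd unit.
	minikube ssh -p sched-demo -- sudo systemctl show minikube-scheduled-stop --no-page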

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-271800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-271800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (396.7657ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-271800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0801 00:16:23.523644   11968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
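The exit status 14 above is the intended outcome: --no-kubernetes cannot be combined with --kubernetes-version. A minimal sketch of the failing call and the remedy quoted in the error text, using a hypothetical profile name:

	# Rejected with MK_USAGE (exit status 14): the two flags conflict.
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
	# If a global kubernetes-version is set, clear it as the error message suggests.
	minikube config unset kubernetes-version
	# Starting with --no-kubernetes alone avoids the conflict.
	minikube start -p nok8s-demo --no-kubernetes --driver=hyperv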

                                                
                                    

Test skip (30/195)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (6.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-457100 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-457100 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 13028: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (6.82s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-457100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-457100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0403417s)

                                                
                                                
-- stdout --
	* [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:03.380731   13264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 22:10:03.472731   13264 out.go:291] Setting OutFile to fd 1040 ...
	I0731 22:10:03.472731   13264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:03.473734   13264 out.go:304] Setting ErrFile to fd 1092...
	I0731 22:10:03.473734   13264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:03.501052   13264 out.go:298] Setting JSON to false
	I0731 22:10:03.504614   13264 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539745,"bootTime":1721924058,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:03.504614   13264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:03.510966   13264 out.go:177] * [functional-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:03.516618   13264 notify.go:220] Checking for updates...
	I0731 22:10:03.519616   13264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:03.525409   13264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:03.528422   13264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:03.533401   13264 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:03.537392   13264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:10:03.541396   13264 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:10:03.543418   13264 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:980: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-457100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-457100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0373377s)

                                                
                                                
-- stdout --
	* [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0731 22:10:08.434145    5800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0731 22:10:08.521737    5800 out.go:291] Setting OutFile to fd 1080 ...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.522738    5800 out.go:304] Setting ErrFile to fd 1096...
	I0731 22:10:08.522738    5800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:10:08.556931    5800 out.go:298] Setting JSON to false
	I0731 22:10:08.562892    5800 start.go:129] hostinfo: {"hostname":"minikube6","uptime":539750,"bootTime":1721924058,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0731 22:10:08.562892    5800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 22:10:08.569887    5800 out.go:177] * [functional-457100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0731 22:10:08.574255    5800 notify.go:220] Checking for updates...
	I0731 22:10:08.578803    5800 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0731 22:10:08.582305    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:10:08.586191    5800 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0731 22:10:08.593286    5800 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:10:08.596569    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:10:08.601023    5800 config.go:182] Loaded profile config "functional-457100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 22:10:08.602547    5800 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1025: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    